Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs

📅 2024-07-22
📈 Citations: 72
Influential: 13
📄 PDF
🤖 AI Summary
Large language models (LLMs) often retain harmful capabilities—jailbreakable behavior, hidden backdoors, and re-learnable harmful knowledge—even after safety fine-tuning, because fine-tuning tends to suppress rather than remove them. Method: This paper develops targeted latent adversarial training (LAT): during training, an adversary perturbs latent activations to minimize loss on a specific competing (harmful) task, and the model is then trained to behave desirably under that perturbation. Unlike prior untargeted LAT, which maximizes loss on examples of desirable behavior, targeted LAT leverages information about specific failure modes and requires no knowledge of backdoor triggers. Contribution/Results: Targeted LAT augments a wide variety of state-of-the-art defenses. Experiments show it improves jailbreak robustness, outperforming the strong R2D2 baseline with orders of magnitude less compute; removes backdoors more effectively without trigger knowledge; and unlearns undesirable knowledge in a way that is more robust to re-learning.

📝 Abstract
Large language models (LLMs) can often be made to behave in undesirable ways that they are explicitly fine-tuned not to. For example, the LLM red-teaming literature has produced a wide variety of 'jailbreaking' techniques to elicit harmful text from models that were fine-tuned to be harmless. Recent work on red-teaming, model editing, and interpretability suggests that this challenge stems from how (adversarial) fine-tuning largely serves to suppress rather than remove undesirable capabilities from LLMs. Prior work has introduced latent adversarial training (LAT) as a way to improve robustness to broad classes of failures. These prior works have considered untargeted latent space attacks where the adversary perturbs latent activations to maximize loss on examples of desirable behavior. Untargeted LAT can provide a generic type of robustness but does not leverage information about specific failure modes. Here, we experiment with targeted LAT where the adversary seeks to minimize loss on a specific competing task. We find that it can augment a wide variety of state-of-the-art methods. First, we use targeted LAT to improve robustness to jailbreaks, outperforming a strong R2D2 baseline with orders of magnitude less compute. Second, we use it to more effectively remove backdoors with no knowledge of the trigger. Finally, we use it to more effectively unlearn knowledge for specific undesirable tasks in a way that is also more robust to re-learning. Overall, our results suggest that targeted LAT can be an effective tool for defending against harmful behaviors from LLMs.
Problem

Research questions and friction points this paper is trying to address.

Safety fine-tuning suppresses rather than removes harmful capabilities, leaving models vulnerable to jailbreaks
Backdoors must be removed without knowledge of their triggers
Unlearning of undesirable knowledge is often shallow and reversible by re-learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Targeted LAT: the latent-space adversary minimizes loss on a specific competing task
Outperforms the R2D2 jailbreak-robustness baseline with orders of magnitude less compute
Removes backdoors without trigger knowledge and makes unlearning more robust to re-learning
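The targeted-LAT loop the abstract describes—an inner adversary perturbing latent activations to minimize loss on a competing task, and an outer defender training the model to behave safely under that perturbation—can be sketched on a toy linear model. This is a minimal numpy sketch, not the paper's implementation: the actual method perturbs the hidden activations of an LLM, and all dimensions, budgets, and learning rates here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear "model": hidden h = W1 @ x, output = W2 @ (h + delta),
# where delta is an adversarial perturbation applied to the latent activations.
d_in, d_h, d_out = 4, 8, 2
W1 = rng.normal(scale=0.5, size=(d_h, d_in))
W2 = rng.normal(scale=0.5, size=(d_out, d_h))

x = rng.normal(size=d_in)
y_safe = np.array([1.0, 0.0])   # desired (safe) output
y_harm = np.array([0.0, 1.0])   # competing (harmful) output the adversary targets

def forward(W1, W2, x, delta):
    return W2 @ (W1 @ x + delta)

eps = 0.5                   # L2 budget for the latent perturbation (illustrative)
lr_adv, lr_def = 0.1, 0.03  # attacker / defender step sizes (illustrative)

loss0 = float(np.sum((forward(W1, W2, x, np.zeros(d_h)) - y_safe) ** 2))

for step in range(300):
    # Inner loop (targeted attack): choose delta to MINIMIZE loss on the
    # harmful target, i.e. steer the latents toward the competing task.
    delta = np.zeros(d_h)
    for _ in range(10):
        y = forward(W1, W2, x, delta)
        grad_delta = 2.0 * W2.T @ (y - y_harm)  # d/d(delta) of ||y - y_harm||^2
        delta -= lr_adv * grad_delta
        norm = np.linalg.norm(delta)
        if norm > eps:                          # project back onto the L2 ball
            delta *= eps / norm

    # Outer loop (defense): train the weights to produce the SAFE output
    # even under the adversarial latent perturbation.
    h = W1 @ x
    y = W2 @ (h + delta)
    g_out = 2.0 * (y - y_safe)                  # d/dy of ||y - y_safe||^2
    grad_W2 = np.outer(g_out, h + delta)
    grad_W1 = np.outer(W2.T @ g_out, x)         # delta treated as a constant
    W2 -= lr_def * grad_W2
    W1 -= lr_def * grad_W1

loss1 = float(np.sum((forward(W1, W2, x, np.zeros(d_h)) - y_safe) ** 2))
print(f"clean safe-task loss: {loss0:.3f} -> {loss1:.3f}")
```

Contrast this with untargeted LAT, where the inner loop would instead maximize loss on `y_safe`; the targeted variant exploits knowledge of the specific failure mode (`y_harm`) without needing to know any trigger that elicits it.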