🤖 AI Summary
Large language models (LLMs) remain vulnerable to jailbreak attacks, and existing unlearning-based defenses focus solely on explicit harmful knowledge, neglecting implicit interdependencies among harmful concepts. Method: This work identifies and empirically validates an “unlearning ripple effect”: targeted unlearning of one category of harmful knowledge (e.g., burglary procedures) implicitly attenuates untargeted but semantically associated harmful knowledge (e.g., ammunition fabrication). Building on this, we propose the “Safe Unlearning” framework, a knowledge-level defense that integrates unlearning into safety alignment. It leverages gradient-driven parameter updates and localized knowledge identification, requiring only 20 original harmful queries. Contribution/Results: Evaluated on Vicuna-7B, Safe Unlearning reduces the out-of-distribution jailbreak attack success rate from 82.6% to 7.7%, substantially outperforming Llama2-7B-Chat, which is fine-tuned on roughly 100K safety alignment samples and augmented with a safety system prompt yet still reaches 21.9%. Our results reveal intrinsic representational coupling among harmful knowledge in LLMs.
📝 Abstract
LLMs are known to be vulnerable to jailbreak attacks, even after safety alignment. An important observation is that, while different types of jailbreak attacks can generate significantly different queries, they mostly elicit similar responses rooted in the same harmful knowledge (e.g., detailed steps to make a bomb). Therefore, we conjecture that directly unlearning the harmful knowledge in the LLM can be a more effective way to defend against jailbreak attacks than the mainstream supervised fine-tuning (SFT) approaches. Our extensive experiments demonstrate the surprising generalizability of our unlearning-based approach: using only 20 raw harmful questions without any jailbreak prompt during training, our solution reduced the Attack Success Rate (ASR) of Vicuna-7B from 82.6% to 7.7% on out-of-distribution (OOD) harmful questions wrapped with various complex jailbreak prompts. This significantly outperforms Llama2-7B-Chat, which is fine-tuned on about 0.1M safety alignment samples but still has an ASR of 21.9% even with the help of an additional safety system prompt. Further analysis reveals that the generalization ability of our solution may stem from the intrinsic relatedness among harmful responses across harmful questions (e.g., shared response patterns, shared steps and actions in responses, and similarity among their learned representations in the LLM). Our code is available at https://github.com/thu-coai/SafeUnlearning.
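The core idea of unlearning harmful knowledge can be sketched as a combined objective: descend the loss on helpful ("retain") data while ascending it on harmful data. The sketch below is a toy illustration only, not the paper's actual implementation; the two-feature logistic model, the examples, and the hyperparameters are all invented for demonstration.

```python
import math

# Toy unlearning-as-gradient-ascent sketch (illustrative assumptions, not the
# paper's method). A 2-feature logistic "compliance" score is trained to keep
# complying with a helpful retain example while unlearning a harmful one.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, x, y):
    # gradient of the log loss -[y*log(p) + (1-y)*log(1-p)] w.r.t. w
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * xi for xi in x]

# features [is_harmful, is_helpful]; label 1 = "comply with the request"
harmful = ([1.0, 0.0], 1.0)
retain = ([0.0, 1.0], 1.0)

w = [2.0, 2.0]        # "pretrained" weights: complies with both query types
lr, alpha = 0.5, 1.0  # alpha scales the unlearning (ascent) term

for _ in range(50):
    g = grad(w, *retain)   # gradient DESCENT: preserve helpful behavior
    w = [wi - lr * gi for wi, gi in zip(w, g)]
    g = grad(w, *harmful)  # gradient ASCENT: push harmful loss up
    w = [wi + lr * alpha * gi for wi, gi in zip(w, g)]

p_harm = sigmoid(w[0])  # compliance probability on the harmful query
p_help = sigmoid(w[1])  # compliance probability on the helpful query
print(f"harmful compliance: {p_harm:.3f}, helpful compliance: {p_help:.3f}")
```

After training, compliance on the harmful input collapses toward zero while compliance on the helpful input stays high, mirroring (in miniature) how ascent on harmful responses can suppress a behavior without a jailbreak-specific defense.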