🤖 AI Summary
Existing LLM safety alignment methods fail to thoroughly erase harmful knowledge, leaving jailbreak attacks highly effective.
Method: We propose Constrained Knowledge Unlearning (CKU), the first approach to integrate knowledge localization with constrained gradient pruning: neurons in specific MLP layers that encode useful knowledge are identified via importance scoring, their gradients are selectively masked during unlearning, and a knowledge-preserving sparse unlearning mechanism is thereby introduced.
Contribution: CKU enables fine-grained, interpretable knowledge editing; uncovers layer-wise sensitivity patterns of MLP modules to safety-critical knowledge; and achieves a 23.6% improvement in defense success rate across multiple jailbreak benchmarks while incurring only a 0.4% degradation in general capability (e.g., on MMLU), significantly outperforming state-of-the-art alignment methods.
📝 Abstract
Despite significant progress in safety alignment, large language models (LLMs) remain susceptible to jailbreak attacks. Existing defense mechanisms do not fully delete harmful knowledge from LLMs, which allows such attacks to bypass safeguards and elicit harmful outputs. To address this challenge, we propose a novel safety alignment strategy, Constrained Knowledge Unlearning (CKU), which pursues two primary objectives: localizing and retaining useful knowledge, and unlearning harmful knowledge. CKU scores neurons in specific multilayer perceptron (MLP) layers to identify a subset U of neurons associated with useful knowledge. During unlearning, CKU prunes the gradients of neurons in U, preserving valuable knowledge while effectively removing harmful content. Experimental results demonstrate that CKU significantly enhances model safety without compromising overall performance, offering a better balance between safety and utility than existing methods. Additionally, our analysis of neuron knowledge sensitivity across MLP layers provides valuable insights into the mechanics of safety alignment and model knowledge editing.
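The two-step mechanism described above — score neurons to identify the useful subset U, then zero out the gradients of U during the unlearning update — can be sketched as follows. This is a minimal pure-Python illustration, not the paper's implementation: the importance scores, the `keep_ratio` threshold, and the function names are hypothetical stand-ins for whatever scoring rule and layer selection CKU actually uses.

```python
def select_useful_neurons(importance_scores, keep_ratio=0.2):
    """Knowledge localization: return the indices of the top-k neurons
    by importance score (the subset U of useful-knowledge neurons).

    `keep_ratio` is an assumed hyperparameter controlling how many
    neurons per MLP layer are protected from the unlearning update.
    """
    k = max(1, int(len(importance_scores) * keep_ratio))
    ranked = sorted(range(len(importance_scores)),
                    key=lambda i: importance_scores[i], reverse=True)
    return set(ranked[:k])


def prune_gradients(grads, useful_set):
    """Constrained unlearning step: zero the gradients of neurons in U
    so the harmful-knowledge unlearning update leaves useful knowledge
    untouched, while all remaining neurons are updated as usual."""
    return [0.0 if i in useful_set else g for i, g in enumerate(grads)]


# Toy usage: 5 neurons, keep the top 40% (indices 0 and 4 here),
# then mask their gradients before applying the unlearning update.
scores = [0.9, 0.1, 0.5, 0.05, 0.8]
U = select_useful_neurons(scores, keep_ratio=0.4)
pruned = prune_gradients([1.0, 1.0, 1.0, 1.0, 1.0], U)
```

In a real LLM this masking would be applied per MLP layer to the weight gradients (e.g., via a backward hook) before each optimizer step of the unlearning objective, so the update is sparse by construction.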