Safety Alignment via Constrained Knowledge Unlearning

📅 2025-05-24
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing LLM safety alignment methods fail to thoroughly erase harmful knowledge, leaving jailbreak attacks highly effective. Method: We propose Constrained Knowledge Unlearning (CKU), the first approach integrating knowledge localization with constrained gradient pruning: neurons in specific MLP layers are scored for importance, a subset associated with useful knowledge is identified, and its gradients are selectively pruned during a knowledge-preserving sparse unlearning process. Contribution: CKU enables fine-grained, interpretable knowledge editing; uncovers layer-wise sensitivity patterns of MLP modules to safety-critical knowledge; and achieves a 23.6% improvement in defense success rate across multiple jailbreak benchmarks while incurring only a 0.4% degradation in general capabilities (e.g., MMLU), significantly outperforming state-of-the-art alignment methods.

📝 Abstract
Despite significant progress in safety alignment, large language models (LLMs) remain susceptible to jailbreak attacks. Existing defense mechanisms have not fully deleted harmful knowledge in LLMs, which allows such attacks to bypass safeguards and produce harmful outputs. To address this challenge, we propose a novel safety alignment strategy, Constrained Knowledge Unlearning (CKU), which focuses on two primary objectives: knowledge localization and retention, and unlearning harmful knowledge. CKU works by scoring neurons in specific multilayer perceptron (MLP) layers to identify a subset U of neurons associated with useful knowledge. During the unlearning process, CKU prunes the gradients of neurons in U to preserve valuable knowledge while effectively mitigating harmful content. Experimental results demonstrate that CKU significantly enhances model safety without compromising overall performance, offering a superior balance between safety and utility compared to existing methods. Additionally, our analysis of neuron knowledge sensitivity across various MLP layers provides valuable insights into the mechanics of safety alignment and model knowledge editing.
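The core mechanism described above (score MLP neurons, identify a useful-knowledge subset U, then zero out U's gradients during unlearning) can be sketched in a few lines. This is an illustrative toy in NumPy, not the paper's implementation: the importance scores, the top-k selection rule, and the gradient-ascent update are all assumptions for demonstration; the actual method operates on MLP layers of an LLM.

```python
import numpy as np

def select_useful_neurons(importance, k):
    """Identify the set U: indices of the k highest-importance neurons.
    (Toy stand-in for the paper's neuron importance scoring.)"""
    return set(np.argsort(importance)[-k:])

def constrained_unlearning_step(weights, grads, useful, lr=0.1):
    """One CKU-style update: apply the unlearning gradient everywhere
    EXCEPT on neurons in U, whose gradients are pruned (zeroed) so the
    useful knowledge they encode is preserved."""
    mask = np.ones(len(weights))
    for i in useful:
        mask[i] = 0.0                      # prune gradients of neurons in U
    return weights + lr * grads * mask     # update only the remaining neurons

# Toy example: 4 neurons, neurons 0 and 2 carry useful knowledge.
importance = np.array([0.9, 0.1, 0.8, 0.2])
U = select_useful_neurons(importance, k=2)   # -> {0, 2}
w = np.zeros(4)
g = np.ones(4)                               # hypothetical unlearning gradient
w_new = constrained_unlearning_step(w, g, U)
print(w_new)                                 # neurons 0 and 2 stay untouched
```

The key design point the sketch captures is that unlearning is made *sparse and constrained*: harmful knowledge is removed by the gradient update, while the masked subset U is explicitly shielded from any change, which is what limits collateral damage to general capabilities.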
Problem

Research questions and friction points this paper is trying to address.

LLMs remain vulnerable to jailbreak attacks
Existing defenses fail to fully erase harmful knowledge
CKU unlearns harmful content while preserving useful knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrained Knowledge Unlearning for safety alignment
Prunes harmful knowledge while preserving useful neurons
Scores neurons in MLP layers for targeted unlearning