Beyond Surface Alignment: Rebuilding LLMs Safety Mechanism via Probabilistically Ablating Refusal Direction

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) remain vulnerable to jailbreak attacks, while existing safety alignment methods suffer from insufficient alignment depth and weak intrinsic robustness against adversarial inputs. Method: The paper proposes DeepRefusal, a deep refusal-mechanism reconstruction framework built on a probabilistic refusal-direction ablation mechanism. During fine-tuning, it probabilistically ablates the refusal direction from activations across layers and token depths, simulating prefilling and refusal-direction attacks so the model learns to rebuild its refusal behavior from jailbroken states. The framework combines hierarchical probabilistic ablation, dynamic refusal reconstruction, fine-grained parameter updates, and multi-level refusal signal coordination to move beyond superficial alignment. Results: Evaluated on four open-source LLM families against six representative jailbreak attack types, the method reduces attack success rates by roughly 95% on average, leaves original model capabilities nearly intact, and generalizes well to unseen attacks.

📝 Abstract
Jailbreak attacks pose persistent threats to large language models (LLMs). Current safety alignment methods attempt to address these threats, but they suffer from two significant limitations: insufficient safety alignment depth and non-robust internal defense mechanisms. These limitations leave them vulnerable to adversarial attacks such as prefilling and refusal direction manipulation. We introduce DeepRefusal, a robust safety alignment framework that overcomes these issues. DeepRefusal forces the model to dynamically rebuild its refusal mechanisms from jailbreak states. This is achieved by probabilistically ablating the refusal direction across layers and token depths during fine-tuning. Our method not only defends against prefilling and refusal direction attacks but also demonstrates strong resilience against other unseen jailbreak strategies. Extensive evaluations on four open-source LLM families and six representative attacks show that DeepRefusal reduces attack success rates by approximately 95%, while maintaining model capabilities with minimal performance degradation.
Problem

Research questions and friction points this paper is trying to address.

Addressing insufficient safety alignment depth in LLMs
Strengthening non-robust internal defense mechanisms against attacks
Mitigating vulnerability to prefilling and refusal direction manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistically ablating refusal direction across layers
Dynamically rebuilds refusal mechanisms from jailbreak states
Defends against prefilling and refusal direction attacks
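The core mechanism, probabilistically projecting the refusal direction out of per-token activations during fine-tuning, can be sketched as follows. This is a minimal illustration of direction ablation, not the authors' implementation; the function name, the use of NumPy, and the per-token Bernoulli mask are assumptions for clarity.

```python
import numpy as np

def ablate_refusal_direction(hidden, r_hat, p, rng):
    """Probabilistically remove the refusal-direction component from
    per-token hidden states at one layer (illustrative sketch only).

    hidden: (seq_len, d_model) array of activations
    r_hat:  (d_model,) unit vector along the refusal direction
    p:      per-token probability of ablation
    rng:    numpy random Generator for the Bernoulli mask
    """
    # Sample which token positions get ablated this step.
    mask = rng.random(hidden.shape[0]) < p
    # Projection coefficient of each token's activation onto r_hat.
    coeffs = hidden @ r_hat
    out = hidden.copy()
    # Subtract the component along r_hat for the masked tokens,
    # pushing those activations into a "jailbroken" state.
    out[mask] -= np.outer(coeffs[mask], r_hat)
    return out
```

With `p = 1.0` every token loses its refusal-direction component (the projections onto `r_hat` become zero); with `p = 0.0` activations pass through unchanged. During fine-tuning, sampling the mask per layer and per token would expose the model to many partial jailbreak states from which it must relearn to refuse.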
Yuanbo Xie
Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
Yingjie Zhang
Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
Tianyun Liu
Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
Duohe Ma
Associate Professor
Moving Target Defense, Information Security, Network Security, Cloud Security, Data Security
Tingwen Liu
Institute of Information Engineering, Chinese Academy of Sciences
Content Security, Natural Language Processing, Knowledge Graph