🤖 AI Summary
Large language models (LLMs) remain vulnerable to jailbreaking attacks, and existing safety alignment methods suffer from insufficient alignment depth and weak intrinsic robustness against adversarial inputs. Method: This paper proposes a deep rejection-mechanism reconstruction framework, introducing the first probabilistic refusal-direction ablation mechanism. During fine-tuning, it probabilistically ablates the refusal direction from activations, per token and across layers, simulating prefilling and refusal-direction attacks so the model learns to adaptively rebuild its refusal behavior from jailbroken states. The framework integrates hierarchical probabilistic ablation, dynamic refusal reconstruction, fine-grained parameter updates, and multi-level refusal signal coordination to move beyond superficial alignment. Results: Evaluated on four open-source LLM families against six representative jailbreaking attack types, the method reduces attack success rates by approximately 95%, leaves original model capabilities nearly intact, and generalizes well to unseen attacks.
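The core operation described above, projecting the refusal-direction component out of a layer's hidden states with some per-token probability, can be sketched as follows. This is an illustrative reconstruction, not the paper's released code: the function name, the NumPy representation of activations, and the ablation probability `p` are all assumptions for the sketch.

```python
import numpy as np

def ablate_refusal_direction(activations, refusal_dir, p=0.5, rng=None):
    """Probabilistically remove the refusal-direction component per token.

    activations: (n_tokens, d_model) hidden states at one layer.
    refusal_dir: (d_model,) vector along the presumed refusal direction.
    p: probability that each token's activation is ablated (assumed knob).
    """
    rng = rng or np.random.default_rng()
    r = refusal_dir / np.linalg.norm(refusal_dir)  # unit refusal direction
    out = activations.copy()
    mask = rng.random(len(out)) < p                # per-token coin flip
    coeffs = out[mask] @ r                         # components along r
    out[mask] -= np.outer(coeffs, r)               # project them out
    return out
```

In the framework as summarized, a transform like this would be applied at multiple layers during fine-tuning, so the model repeatedly sees its own refusal signal suppressed and is trained to restore refusal behavior anyway.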
📝 Abstract
Jailbreak attacks pose persistent threats to large language models (LLMs). Current safety alignment methods have attempted to address these issues, but they suffer from two significant limitations: insufficient safety alignment depth and fragile internal defense mechanisms. These limitations leave them vulnerable to adversarial attacks such as prefilling and refusal-direction manipulation. We introduce DeepRefusal, a robust safety alignment framework that overcomes these issues. DeepRefusal forces the model to dynamically rebuild its refusal mechanisms from jailbreak states. This is achieved by probabilistically ablating the refusal direction across layers and token depths during fine-tuning. Our method not only defends against prefilling and refusal-direction attacks but also demonstrates strong resilience against other unseen jailbreak strategies. Extensive evaluations on four open-source LLM families and six representative attacks show that DeepRefusal reduces attack success rates by approximately 95%, while maintaining model capabilities with minimal performance degradation.