🤖 AI Summary
This paper addresses the vulnerability of large language models (LLMs) to jailbreak attacks and harmful content generation. The authors propose DEEPALIGN, a dynamic, layer-wise purification defense framework. Methodologically, it integrates three key components: (1) a novel hidden-state-based hybrid loss function that enables real-time toxicity detection and suppression during generation; (2) a redefined notion of “safe responses” as semantically relevant yet harmless outputs, which improves robustness against representation-mutation jailbreaks; and (3) joint optimization of hidden states, progressive detoxification fine-tuning, and dynamic context-aware alignment. Evaluated across six prevalent jailbreak attack categories, DEEPALIGN reduces attack success rates by up to two orders of magnitude, significantly outperforming state-of-the-art defenses, while preserving the model’s original task performance with no measurable degradation.
📝 Abstract
Large Language Models (LLMs) are vulnerable to jailbreak attacks, which use crafted prompts to elicit toxic responses. These attacks exploit LLMs' difficulty in dynamically detecting harmful intent during the generation process. Conventional safety alignment methods, which typically act only on the first few generation steps, are ineffective because of this limited computational budget. This paper proposes DEEPALIGN, a robust defense framework that fine-tunes LLMs to progressively detoxify generated content, substantially increasing both the computational budget available for mitigating harmful generation and the effectiveness of that mitigation. Our approach uses a hybrid loss function operating on hidden states to directly improve LLMs' inherent awareness of toxicity during generation. Furthermore, we redefine safe responses by generating semantically relevant answers to harmful queries, thereby increasing robustness against representation-mutation attacks. Evaluations across multiple LLMs demonstrate state-of-the-art defense performance against six different attack types, reducing attack success rates by up to two orders of magnitude compared to previous state-of-the-art defenses while preserving utility. This work advances LLM safety by addressing limitations of conventional alignment through dynamic, context-aware mitigation.
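To make the hybrid-loss idea more concrete, the sketch below shows one plausible way such an objective could be assembled: a standard language-modeling loss over the safe target response combined with a per-token toxicity loss computed from the model's hidden states. This is a minimal sketch under stated assumptions, not the paper's actual formulation; the names `toxicity_probe`, `lambda_tox`, and the choice of the final hidden layer are all illustrative.

```python
# Illustrative sketch (PyTorch / HuggingFace-style model outputs) of a
# hidden-state-based hybrid loss. All names and design choices here are
# hypothetical assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def hybrid_loss(model, toxicity_probe, input_ids, labels, toxicity_labels,
                lambda_tox=0.5):
    """Combine the usual next-token LM loss with a per-token toxicity loss
    computed on the model's hidden states, so toxicity can be detected and
    suppressed throughout generation, not only at the first few tokens."""
    outputs = model(input_ids=input_ids, labels=labels,
                    output_hidden_states=True)
    lm_loss = outputs.loss  # cross-entropy over the safe target response

    # Probe the last-layer hidden states for toxicity with a small linear head,
    # e.g. toxicity_probe = torch.nn.Linear(hidden_dim, 1).
    hidden = outputs.hidden_states[-1]               # (batch, seq_len, hidden_dim)
    tox_logits = toxicity_probe(hidden).squeeze(-1)  # (batch, seq_len)
    tox_loss = F.binary_cross_entropy_with_logits(
        tox_logits, toxicity_labels.float()          # 1 = toxic token, 0 = benign
    )

    # Hybrid objective: stay fluent and on-topic while pushing hidden states
    # toward regions the probe classifies as non-toxic.
    return lm_loss + lambda_tox * tox_loss
```

In this reading, the probe and the base model are fine-tuned jointly, which is one way the "joint optimization of hidden states" and progressive detoxification described above could be realized in practice.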