Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the vulnerability of large language models (LLMs) to jailbreak attacks and harmful content generation. The authors propose DEEPALIGN, a dynamic, layer-wise purification defense framework. Methodologically, it integrates three key components: (1) a novel hidden-state-based hybrid loss function enabling real-time toxicity detection and suppression during generation; (2) a redefined notion of "safe responses" as semantically relevant yet harmless outputs, enhancing robustness against representation-variant jailbreaks; and (3) joint optimization of hidden states, progressive detoxification fine-tuning, and dynamic context-aware alignment. Evaluated across six prevalent jailbreak attack categories, DEEPALIGN reduces attack success rates by up to two orders of magnitude, significantly outperforming state-of-the-art defenses, while preserving the model's original task performance with no measurable degradation.

📝 Abstract
Large Language Models (LLMs) are vulnerable to jailbreak attacks, which use crafted prompts to elicit toxic responses. These attacks exploit LLMs' difficulty in dynamically detecting harmful intents during the generation process. Traditional safety alignment methods, often relying on the initial few generation steps, are ineffective due to limited computational budget. This paper proposes DEEPALIGN, a robust defense framework that fine-tunes LLMs to progressively detoxify generated content, significantly improving both the computational budget and effectiveness of mitigating harmful generation. Our approach uses a hybrid loss function operating on hidden states to directly improve LLMs' inherent awareness of toxicity during generation. Furthermore, we redefine safe responses by generating semantically relevant answers to harmful queries, thereby increasing robustness against representation-mutation attacks. Evaluations across multiple LLMs demonstrate state-of-the-art defense performance against six different attack types, reducing Attack Success Rates by up to two orders of magnitude compared to previous state-of-the-art defenses while preserving utility. This work advances LLM safety by addressing limitations of conventional alignment through dynamic, context-aware mitigation.
Problem

Research questions and friction points this paper is trying to address.

Defends against jailbreak attacks on Large Language Models (LLMs).
Improves both the computational budget and the effectiveness of detoxifying generated content.
Redefines safe responses to increase robustness against attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive detoxification during LLM generation
Hybrid loss function on hidden states
Semantically relevant safe responses to harmful queries
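The hybrid loss described above combines the standard language-modeling objective with a toxicity signal read directly from hidden states. The paper does not publish its exact formulation here, so the following is a minimal sketch of the general idea: a linear probe scores each token's hidden state for toxicity, and a binary cross-entropy penalty on those scores is added to the LM loss. The names `probe_w`, `probe_b`, and the weight `alpha` are hypothetical, not from the paper.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def hybrid_loss(hidden_states, toxic_labels, lm_loss,
                probe_w, probe_b, alpha=0.5):
    """Illustrative hybrid objective: LM loss plus a hidden-state
    toxicity penalty from a linear probe (hypothetical formulation,
    not DEEPALIGN's actual loss)."""
    # Per-token toxicity logit from a linear probe on each hidden state.
    logits = hidden_states @ probe_w + probe_b        # shape (T,)
    probs = sigmoid(logits)
    # Binary cross-entropy against per-token toxicity labels.
    eps = 1e-9
    tox_loss = -np.mean(
        toxic_labels * np.log(probs + eps)
        + (1.0 - toxic_labels) * np.log(1.0 - probs + eps)
    )
    # Weighted sum: the model is trained to predict the next token
    # while keeping its own hidden states aware of toxicity.
    return lm_loss + alpha * tox_loss


rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))            # 4 tokens, hidden size 8
w = rng.normal(size=8)
labels = np.array([0.0, 0.0, 1.0, 0.0])  # third token flagged toxic
total = hybrid_loss(H, labels, lm_loss=2.0, probe_w=w, probe_b=0.0)
```

Because the cross-entropy term is strictly positive for imperfect probe predictions, the combined loss always exceeds the bare LM loss, pushing the fine-tuned model to shape hidden states that the probe can classify correctly.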
Yingjie Zhang
Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
Tong Liu
Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
Zhe Zhao
Tsinghua University; RealAI
Guozhu Meng
Associate Professor with Chinese Academy of Sciences
mobile security; program analysis; AI privacy and security
Kai Chen
Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China