SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
To address the failure of safety alignment in large language models (LLMs) under jailbreak attacks, this paper proposes a safety-enhancement method applied at the decoding stage. The approach employs a dual-model architecture comprising a Sentinel Model and an Intruder Model; by contrasting the safety disparity between their responses, it dynamically reweights the target LLM's token-level output distribution at generation time, without fine-tuning the target model. Its core innovation is the proposed *response-disparity guidance mechanism*, which offers broad generalizability, low computational overhead, and adaptability across diverse attack types. Experiments demonstrate a defense success rate above 92% across multiple benchmarks, raising the probability of beneficial tokens while suppressing harmful ones. Crucially, it incurs negligible degradation in general task performance.

📝 Abstract
As the development of large language models (LLMs) rapidly advances, securing these models effectively without compromising their utility has become a pivotal area of research. However, current defense strategies against jailbreak attacks (i.e., efforts to bypass security protocols) often suffer from limited adaptability, restricted general capability, and high cost. To address these challenges, we introduce SafeAligner, a methodology implemented at the decoding stage to fortify defenses against jailbreak attacks. We begin by developing two specialized models: the Sentinel Model, which is trained to foster safety, and the Intruder Model, designed to generate riskier responses. SafeAligner leverages the disparity in security levels between the responses from these models to differentiate between harmful and beneficial tokens, effectively guiding the safety alignment by altering the output token distribution of the target model. Extensive experiments show that SafeAligner can increase the likelihood of beneficial tokens, while reducing the occurrence of harmful ones, thereby ensuring secure alignment with minimal loss to generality.
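The abstract's core mechanism, using the disparity between a safety-trained Sentinel Model and a risk-prone Intruder Model to reweight the target model's next-token distribution at decoding time, can be sketched in logit space. The function names and the specific weighting `alpha * (sentinel - intruder)` below are illustrative assumptions, not the paper's exact formula.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def safe_guided_distribution(target_logits, sentinel_logits, intruder_logits, alpha=1.0):
    """Sketch of decoding-time safety guidance (illustrative, not the paper's
    exact formula): shift the target model's next-token distribution toward
    tokens the sentinel (safety-trained) model favors and away from tokens
    the intruder (risk-prone) model favors. `alpha` scales guidance strength."""
    guided = [t + alpha * (s - i)
              for t, s, i in zip(target_logits, sentinel_logits, intruder_logits)]
    return softmax(guided)

# Toy 4-token vocabulary: token 0 plays a "harmful" token, token 1 a "safe" one.
target   = [2.0, 1.0, 0.5, 0.1]
sentinel = [0.2, 2.0, 0.1, 0.0]   # sentinel favors the safe token
intruder = [2.5, 0.1, 0.2, 0.0]   # intruder favors the harmful token

base   = softmax(target)
guided = safe_guided_distribution(target, sentinel, intruder, alpha=0.8)
# The guided distribution lowers the harmful token's probability relative to
# the base distribution and raises the safe token's probability.
```

Because the contrast happens per decoding step on the output distribution, no gradient updates to the target model are needed, which matches the abstract's claim of securing the model without fine-tuning.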
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Security
Cost-effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

SafeAligner
Sentinel model
Intruder model
Caishuang Huang
Fudan University
LLM, RLHF, Tool Learning
Wanxu Zhao
School of Computer Science, Fudan University, Shanghai, China
Rui Zheng
School of Computer Science, Fudan University, Shanghai, China
Huijie Lv
School of Computer Science, Fudan University, Shanghai, China
Shihan Dou
Fudan University
LLMs, Code LMs, RL, Alignment
Sixian Li
Master's degree student, Fudan University
NLP
Xiao Wang
School of Computer Science, Fudan University, Shanghai, China
Enyu Zhou
School of Computer Science, Fudan University, Shanghai, China
Junjie Ye
School of Computer Science, Fudan University, Shanghai, China
Yuming Yang
Fudan University
Natural Language Processing, Large Language Models
Tao Gui
Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China
Qi Zhang
Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China
Xuanjing Huang
School of Computer Science, Fudan University, Shanghai, China