🤖 AI Summary
Large language models (LLMs) exhibit significant safety-alignment vulnerabilities under jailbreak attacks.
Method: This paper proposes the **DSN (Don't Say No)** attack, a paradigm that shifts the optimization objective from eliciting affirmative harmful outputs to suppressing refusal responses. It first shows why the standard target loss is suboptimal in adversarial settings, then enhances the loss objective within a gradient-based prompt optimization framework. Separately, to assess attack outcomes reliably, the paper combines natural language inference (NLI) contradiction detection with two external LLM evaluators into a robust Ensemble Evaluation pipeline.
Contribution/Results: DSN achieves substantially higher attack success rates than state-of-the-art methods across multiple mainstream open- and closed-source LLMs. The Ensemble Evaluation pipeline reduces the false positive rate by 37% and the false negative rate by 52%, markedly improving assessment accuracy. Together, these establish an interpretable, generalizable paradigm for both attacking and evaluating LLM safety alignment.
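The refusal-suppression idea above can be sketched as a loss that combines the usual target likelihood with an unlikelihood penalty on refusal tokens. This is a minimal illustration only; the exact formulation, weighting, and refusal-token set used by the paper are assumptions here, not its actual implementation:

```python
import math

def dsn_style_loss(target_probs, refusal_probs, alpha=1.0):
    """Sketch of a refusal-suppression objective (hypothetical formulation).

    target_probs:  model probabilities of the affirmative target tokens
                   (standard negative log-likelihood term, as in GCG-style attacks).
    refusal_probs: model probabilities of refusal tokens (e.g. "I cannot");
                   the unlikelihood term -log(1 - p) grows as the model
                   becomes more likely to refuse.
    alpha:         assumed weighting between the two terms.
    """
    target_nll = -sum(math.log(p) for p in target_probs)
    refusal_unlikelihood = -sum(math.log(1.0 - p) for p in refusal_probs)
    return target_nll + alpha * refusal_unlikelihood
```

Under this sketch, lowering the probability of refusal tokens reduces the loss even when the affirmative-target term is unchanged, which is the intuition behind suppressing refusal rather than only optimizing for harmful targets.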
📝 Abstract
Ensuring the safety alignment of Large Language Models (LLMs) is crucial for generating responses consistent with human values. Despite their ability to recognize and avoid harmful queries, LLMs are vulnerable to jailbreaking attacks, where carefully crafted prompts induce them to produce toxic content. One category of jailbreak attack reformulates the task as an optimization problem, eliciting the LLM to generate affirmative responses. However, such an optimization objective has its own limitations, such as being restricted to predefined objectionable behaviors, leading to suboptimal attack performance. In this study, we first uncover why the vanilla target loss is not optimal, then explore and enhance the loss objective and introduce the DSN (Don't Say No) attack, which achieves successful attacks by suppressing refusal. Another challenge in studying jailbreak attacks is evaluation, as it is difficult to directly and accurately assess the harmfulness of responses. Existing evaluation methods, such as refusal keyword matching, produce numerous false positives and false negatives. To overcome this challenge, we propose an Ensemble Evaluation pipeline that incorporates Natural Language Inference (NLI) contradiction assessment and two external LLM evaluators. Extensive experiments demonstrate the potential of the DSN attack and the effectiveness of Ensemble Evaluation compared to baseline methods.