Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the vulnerability of large language models (LLMs) to jailbreak attacks and the harmful outputs such attacks elicit, this paper proposes Layer-AdvPatcher, a layer-level adversarial defense method. It introduces a layer-sensitivity self-exposure mechanism to identify fragile layers that are overly sensitive to "affirmative tokens," and combines adversarial-sample self-amplification with token-level unlearning to selectively weaken attack pathways through localized fine-tuning while preserving the model's original functionality. Evaluated on two mainstream LLMs, four major safety benchmarks, and diverse state-of-the-art jailbreak attacks, Layer-AdvPatcher significantly reduces both attack success rates and harmful-output rates without degrading performance on benign queries. Its core contribution is the first layer-granular, interpretable, localizable, and repairable adversarial safety-hardening framework for LLMs.
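The layer-localization step described above can be sketched as a logit-lens-style probe: decode each layer's hidden state through the unembedding matrix and score the probability mass placed on affirmative tokens (e.g. "Sure"). This is a minimal illustration under assumed toy shapes; `affirmative_sensitivity` and `most_sensitive_layer` are hypothetical names, not the paper's API.

```python
import numpy as np

def affirmative_sensitivity(hidden_states, W_U, affirmative_ids):
    """Score each layer by the probability mass it places on
    'affirmative' token ids when its last-position hidden state
    is decoded through the unembedding matrix W_U (logit lens).

    hidden_states: list of (d_model,) vectors, one per layer.
    W_U: (d_model, vocab) unembedding matrix.
    affirmative_ids: array of token ids treated as affirmative.
    """
    scores = []
    for h in hidden_states:
        logits = h @ W_U
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        scores.append(float(probs[affirmative_ids].sum()))
    return scores

def most_sensitive_layer(hidden_states, W_U, affirmative_ids):
    """Return the index of the layer most biased toward
    affirmative tokens, plus the per-layer scores."""
    scores = affirmative_sensitivity(hidden_states, W_U, affirmative_ids)
    return int(np.argmax(scores)), scores
```

In a real model the hidden states would come from a forward pass with intermediate activations exposed; the layer(s) with the highest score are the candidates for localized patching.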

📝 Abstract
As large language models (LLMs) are increasingly deployed in diverse applications, including chatbot assistants and code generation, aligning their behavior with safety and ethical standards has become paramount. However, jailbreak attacks, which exploit vulnerabilities to elicit unintended or harmful outputs, significantly threaten LLMs' safety. In this paper, we introduce Layer-AdvPatcher, a novel methodology designed to defend against jailbreak attacks by utilizing an unlearning strategy to patch specific layers within LLMs through self-augmented datasets. Our insight is that certain layer(s) tend to produce affirmative tokens when faced with harmful prompts. By identifying these layers and adversarially exposing them to generate more harmful data, one can understand their inherent and diverse vulnerabilities to attacks. With these exposures, we then "unlearn" these issues, reducing the impact of affirmative tokens and hence minimizing jailbreak risks while keeping the model's responses to safe queries intact. We conduct extensive experiments on two models, four benchmark datasets, and multiple state-of-the-art jailbreak benchmarks to demonstrate the efficacy of our approach. Results indicate that our framework reduces the harmfulness and attack success rate of jailbreak attacks without compromising utility for benign queries compared to recent defense methods.
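The "unlearn while keeping safe-query behavior intact" idea can be illustrated as a combined token-level objective: push down the probability of affirmative tokens on harmful prompts while retaining ordinary next-token behavior on benign prompts. This is a hedged sketch, not the paper's exact loss; `unlearning_loss` and the `alpha` weighting are illustrative assumptions.

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log-softmax over a 1-D logits vector."""
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def unlearning_loss(harmful_logits, affirmative_ids,
                    benign_logits, benign_target, alpha=1.0):
    """Illustrative token-level unlearning objective.

    Forget term: the log-probability mass on affirmative tokens for
    a harmful prompt; minimizing the total loss drives it down
    (gradient ascent on those tokens' NLL).
    Retain term: ordinary NLL of the correct next token on a benign
    prompt, so the patched layer keeps its original behavior.
    alpha balances forgetting against retention.
    """
    lp_harm = log_softmax(harmful_logits)
    forget = np.log(np.exp(lp_harm[affirmative_ids]).sum())
    retain = -log_softmax(benign_logits)[benign_target]
    return alpha * forget + retain
```

In practice this loss would be applied only to the parameters of the self-exposed layer(s), with the rest of the model frozen, which is what makes the patch localized.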
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Jailbreak Attacks
Security and Ethics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-AdvPatcher
unlearning strategy
jailbreak defense