Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement

📅 2025-05-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies an intrinsic decoupling between safety discrimination and safe generation in large language models (LLMs) under jailbreaking attacks: although LLMs can detect jailbreaking prompts, they still produce unsafe responses. To address this, we propose Self-Aware Guard Enhancement (SAGE), a training-free, instruction-driven, inference-time intervention framework that unifies discriminative analysis with response generation. SAGE introduces lightweight, interpretable reasoning by dynamically calibrating model outputs via hidden-state and attention-based diagnostics. It is the first method to achieve unified detection–response behavior without retraining. Evaluated across diverse open- and closed-weight LLMs, SAGE achieves an average 99% jailbreak defense success rate while preserving general task performance—significantly outperforming existing zero-shot defense approaches.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities across various tasks but remain vulnerable to meticulously crafted jailbreak attacks. In this paper, we identify a critical safety gap: while LLMs are adept at detecting jailbreak prompts, they often produce unsafe responses when directly processing these inputs. Inspired by this insight, we propose SAGE (Self-Aware Guard Enhancement), a training-free defense strategy designed to align LLMs' strong safety discrimination performance with their relatively weaker safety generation ability. SAGE consists of two core components: a Discriminative Analysis Module and a Discriminative Response Module, enhancing resilience against sophisticated jailbreak attempts through flexible safety discrimination instructions. Extensive experiments demonstrate SAGE's effectiveness and robustness across various open-source and closed-source LLMs of different sizes and architectures, achieving an average 99% defense success rate against numerous complex and covert jailbreak methods while maintaining helpfulness on general benchmarks. We further conduct mechanistic interpretability analysis through hidden states and attention distributions, revealing the underlying mechanisms of this detection-generation discrepancy. Our work thus contributes to developing future LLMs with coherent safety awareness and generation behavior. Our code and datasets are publicly available at https://github.com/NJUNLP/SAGE.
Problem

Research questions and friction points this paper is trying to address.

LLMs detect jailbreak prompts but produce unsafe responses
Align safety discrimination with weak safety generation ability
Enhance resilience against sophisticated jailbreak attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free defense strategy SAGE
Discriminative Analysis and Response Modules
99% defense success rate
🔎 Similar Papers
No similar papers found.