AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a lightweight defense against jailbreak attacks on large language models that requires no model parameter modification, auxiliary modules, or multi-round inference. By leveraging a single forward pass, the method identifies strong intent-discriminative signals embedded in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Through spatiotemporal analysis, safety-relevant attention heads are localized to extract a prompt-level risk score, which dynamically modulates the decoding distribution at the logits level to adaptively reject high-risk requests. Extensive experiments across 13 datasets, 12 large language models, and 14 baselines demonstrate that the approach significantly enhances robustness and generalization while preserving model utility and reducing false rejection rates.
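As a rough illustration of the scoring step described above: the sketch below computes a single head's scaled dot-product output at a query position and maps the concatenated outputs of a few selected heads to a scalar risk. The function names, the linear-probe-plus-sigmoid readout, and all shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def scaled_dot_product_output(q, K, V):
    """Scaled dot-product attention output for one head at one query
    position. q: (d,) query vector; K, V: (T, d) keys/values over the
    T prompt tokens."""
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)          # (T,) similarity per prompt token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over the prompt
    return weights @ V                   # (d,) head output

def prompt_risk_score(head_outputs, probe_w, probe_b=0.0):
    """Map the concatenated outputs of the selected safety-relevant
    heads to a scalar risk in [0, 1]. The linear probe + sigmoid
    scoring rule here is a hypothetical stand-in for the paper's
    actual scoring function."""
    z = np.concatenate(head_outputs) @ probe_w + probe_b
    return 1.0 / (1.0 + np.exp(-z))
```

Because everything is read off a single forward pass, the overhead is one softmax and one dot product per selected head.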

📝 Abstract
Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency and can degrade helpfulness. We present AISA, a lightweight, single-pass defense that activates safety behaviors already latent inside the model rather than treating safety as an add-on. AISA first localizes intrinsic safety awareness via spatiotemporal analysis and shows that intent-discriminative signals are broadly encoded, with especially strong separability appearing in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Using a compact set of automatically selected heads, AISA extracts an interpretable prompt-risk score with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines on small (7B) models. AISA then performs logits-level steering: it modulates the decoding distribution in proportion to the inferred risk, ranging from normal generation for benign prompts to calibrated refusal for high-risk requests -- without changing model parameters, adding auxiliary modules, or requiring multi-pass inference. Extensive experiments spanning 13 datasets, 12 LLMs, and 14 baselines demonstrate that AISA improves robustness and transfer while preserving utility and reducing false refusals, enabling safer deployment even for weakly aligned or intentionally risky model variants.
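The logits-level steering described in the abstract can be sketched as a risk-proportional shift of the decoding distribution. In the toy version below, `refusal_token_ids`, the additive rule, and the strength `alpha` are assumptions, not the paper's exact mechanism; the point is only that a risk near 0 leaves generation untouched while a risk near 1 pushes probability mass toward refusal.

```python
import numpy as np

def steer_logits(logits, risk, refusal_token_ids, alpha=10.0):
    """Add a risk-proportional bonus to refusal-related token logits.
    logits: (V,) next-token logits; risk: scalar in [0, 1] from the
    prompt-level detector; refusal_token_ids: indices of tokens that
    begin a refusal (hypothetical). No parameters are modified and no
    extra forward pass is needed."""
    steered = logits.copy()
    steered[refusal_token_ids] += alpha * float(risk)
    return steered
```

For a benign prompt (risk ≈ 0) the steered logits equal the originals, so utility is preserved; for a high-risk prompt the refusal tokens dominate the distribution, yielding a calibrated refusal.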
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
large language models
safety alignment
prompt vulnerability
harmful outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

intrinsic safety awareness
jailbreak defense
attention head analysis
logits-level steering
lightweight LLM safety
🔎 Similar Papers
2024-07-01 · Conference on Empirical Methods in Natural Language Processing · Citations: 2
2024-01-12 · International Conference on Computational Linguistics · Citations: 11
Weiming Song
Beijing University of Technology
Xuan Xie
Macau University of Science and Technology
Trustworthy LLM · Cyber Physical System · Neural Network Verification
Ruiping Yin
Beijing University of Technology