🤖 AI Summary
This work proposes a lightweight defense against jailbreak attacks on large language models that requires no modification of model parameters, no auxiliary modules, and no multi-round inference. Using only a single forward pass, the method identifies strong intent-discriminative signals embedded in the scaled dot-product outputs of specific attention heads at the final structural tokens before generation. A spatiotemporal analysis localizes these security-relevant attention heads, from which a prompt-level risk score is extracted; the score then dynamically modulates the decoding distribution at the logits level so that high-risk requests are adaptively rejected. Extensive experiments across 13 datasets, 12 large language models, and 14 baselines demonstrate that the approach significantly improves robustness and generalization while preserving model utility and reducing false rejection rates.
📝 Abstract
Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency and can degrade helpfulness. We present AISA, a lightweight, single-pass defense that activates safety behaviors already latent inside the model rather than treating safety as an add-on. AISA first localizes intrinsic safety awareness via spatiotemporal analysis and shows that intent-discriminative signals are broadly encoded, with especially strong separability appearing in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Using a compact set of automatically selected heads, AISA extracts an interpretable prompt-risk score with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines on small (7B) models. AISA then performs logits-level steering: it modulates the decoding distribution in proportion to the inferred risk, ranging from normal generation for benign prompts to calibrated refusal for high-risk requests -- without changing model parameters, adding auxiliary modules, or requiring multi-pass inference. Extensive experiments spanning 13 datasets, 12 LLMs, and 14 baselines demonstrate that AISA improves robustness and transfer while preserving utility and reducing false refusals, enabling safer deployment even for weakly aligned or intentionally risky model variants.
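The two stages described above, extracting a prompt-risk score from a compact set of attention heads and then steering the logits in proportion to that risk, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the per-head scoring directions (`directions`), the sigmoid aggregation, the refusal-token bias, and the scale `alpha` are all hypothetical stand-ins for whatever head selection and calibration AISA uses.

```python
import numpy as np

def risk_score(head_outputs, directions):
    """Aggregate a prompt-level risk score from selected attention heads.

    head_outputs: (H, d) array of scaled dot-product outputs of the H
        selected heads at the final structural token before generation.
    directions:   (H, d) array of per-head intent-discriminative directions
        (assumed to be fitted offline on labeled prompts; hypothetical here).
    Returns a scalar in (0, 1): higher means more likely harmful intent.
    """
    # Project each head's output onto its direction, squash, and average.
    projections = np.einsum("hd,hd->h", head_outputs, directions)
    return float(np.mean(1.0 / (1.0 + np.exp(-projections))))

def steer_logits(logits, refusal_token_ids, risk, alpha=10.0):
    """Logits-level steering: bias decoding toward refusal tokens
    in proportion to the inferred risk (alpha is a hypothetical scale)."""
    steered = logits.copy()
    steered[refusal_token_ids] += alpha * risk  # benign prompts (risk ~ 0) are barely touched
    return steered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    directions = rng.normal(size=(4, 8))
    # Toy inputs: one prompt aligned with the harmful directions, one opposed.
    harmful = directions.copy()
    benign = -directions
    r_harm, r_ben = risk_score(harmful, directions), risk_score(benign, directions)
    logits = np.zeros(16)
    steered = steer_logits(logits, refusal_token_ids=[3], risk=r_harm)
    print(r_harm, r_ben, steered[3])
```

In a real decoder this steering would run once per prompt inside the generation loop (e.g. as a logits processor), which is what keeps the defense single-pass: the risk signal is read off activations the forward pass computes anyway.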