ALERT: Zero-shot LLM Jailbreak Detection via Internal Discrepancy Amplification

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of detecting novel jailbreak attacks against large language models in zero-shot settings, where existing methods often fail to generalize. The authors propose a multi-granularity internal representation discrepancy amplification framework that systematically identifies security-critical components by analyzing model activations at the layer, module, and token levels. By integrating hierarchical, modular, and token-wise feature enhancement mechanisms and employing two complementary classifiers for joint decision-making, the approach uncovers the model's intrinsic discriminative safety signals. Evaluated on three mainstream safety benchmarks, the method consistently ranks among the top two, improving average accuracy and F1 score by 10%-40% over the strongest baselines, and is the first to demonstrate high-precision jailbreak detection in a zero-shot scenario.

๐Ÿ“ Abstract
Despite rich safety alignment strategies, large language models (LLMs) remain highly susceptible to jailbreak attacks, which compromise safety guardrails and pose serious security risks. Existing detection methods identify jailbreaks mainly by relying on jailbreak templates present in the training data. However, few studies address the more realistic and challenging zero-shot jailbreak detection setting, where no jailbreak templates are available during training. This setting better reflects real-world scenarios where new attacks continually emerge and evolve. To address this challenge, we propose a layer-wise, module-wise, and token-wise amplification framework that progressively magnifies internal feature discrepancies between benign and jailbreak prompts. We uncover safety-relevant layers, identify specific modules that inherently encode zero-shot discriminative signals, and localize informative safety tokens. Building upon these insights, we introduce ALERT (Amplification-based Jailbreak Detector), an efficient and effective zero-shot jailbreak detector that combines two independent yet complementary classifiers on amplified representations. Extensive experiments on three safety benchmarks demonstrate that ALERT achieves consistently strong zero-shot detection performance. Specifically, (i) across all datasets and attack strategies, ALERT reliably ranks among the top two methods, and (ii) it outperforms the second-best baseline by at least 10% in average Accuracy and F1-score, and sometimes by up to 40%.
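The abstract's pipeline (locate safety-relevant layers, amplify the discriminative features, then run two complementary detectors jointly) can be sketched on toy data. This is a minimal illustration, not the paper's implementation: the synthetic Gaussian "activations", the top-2 layer selection, the centroid-distance detector, and the least-squares linear probe are all stand-in assumptions; in ALERT the representations would come from an LLM's forward pass and the components are identified at layer, module, and token granularity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-layer hidden states, shape (n_prompts, n_layers, d).
# We synthesize benign and jailbreak activations that differ only in a
# couple of layers, mimicking "safety-relevant" layers (an assumption).
n_benign, n_jail, n_layers, d = 40, 40, 8, 16
benign = rng.normal(0.0, 1.0, (n_benign, n_layers, d))
jail = rng.normal(0.0, 1.0, (n_jail, n_layers, d))
jail[:, 3:5, :] += 1.5  # layers 3-4 carry the discriminative signal

def layer_discrepancy(a, b):
    """Norm of the mean-activation difference, per layer."""
    return np.linalg.norm(a.mean(0) - b.mean(0), axis=-1)

# 1) Uncover safety-relevant layers by their benign/jailbreak discrepancy.
scores = layer_discrepancy(benign, jail)
top_layers = np.argsort(scores)[-2:]

def amplify(x):
    # 2) Keep only the informative layers, scaled by their discrepancy
    #    score, so the benign/jailbreak gap is magnified in feature space.
    return (x[:, top_layers, :] * scores[top_layers, None]).reshape(len(x), -1)

X = np.vstack([amplify(benign), amplify(jail)])
y = np.array([0] * n_benign + [1] * n_jail)

# 3) Two independent detectors on the amplified features, joined by OR:
#    (a) distance to the benign centroid with a 2-sigma threshold,
#    (b) a linear probe fit by closed-form least squares.
centroid = X[y == 0].mean(0)
dist = np.linalg.norm(X - centroid, axis=1)
thresh = dist[y == 0].mean() + 2 * dist[y == 0].std()
pred_a = dist > thresh

w, *_ = np.linalg.lstsq(X, y * 2.0 - 1.0, rcond=None)
pred_b = X @ w > 0

pred = pred_a | pred_b
acc = (pred == y.astype(bool)).mean()
```

On this separable toy data both detectors agree almost everywhere; the joint OR decision trades a few benign false positives for higher recall, which is one plausible reading of "complementary classifiers for joint decision-making".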
Problem

Research questions and friction points this paper is trying to address.

jailbreak detection
zero-shot
large language models
safety alignment
adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot jailbreak detection
internal discrepancy amplification
safety alignment
large language models
feature representation analysis
🔎 Similar Papers
No similar papers found.