🤖 AI Summary
This work addresses the challenge of detecting novel jailbreak attacks against large language models in zero-shot settings, where existing methods often fail to generalize. The authors propose a multi-granularity internal-representation discrepancy amplification framework that systematically identifies security-critical components by analyzing model activations at the layer, module, and token levels. By integrating hierarchical, modular, and token-wise feature-enhancement mechanisms and employing two complementary classifiers for joint decision-making, the approach surfaces the model's intrinsic discriminative safety signals. Evaluated on three mainstream safety benchmarks, the method consistently ranks among the top two, improving average accuracy and F1 score by 10% to 40% over the strongest baselines, and is the first to demonstrate high-precision jailbreak detection in a zero-shot scenario.
📝 Abstract
Despite rich safety alignment strategies, large language models (LLMs) remain highly susceptible to jailbreak attacks, which compromise safety guardrails and pose serious security risks. Existing detection methods mainly rely on jailbreak templates that appear in the training data. However, few studies address the more realistic and challenging zero-shot jailbreak detection setting, where no jailbreak templates are available during training. This setting better reflects real-world scenarios, where new attacks continually emerge and evolve. To address this challenge, we propose a layer-wise, module-wise, and token-wise amplification framework that progressively magnifies internal feature discrepancies between benign and jailbreak prompts. We uncover safety-relevant layers, identify specific modules that inherently encode zero-shot discriminative signals, and localize informative safety tokens. Building upon these insights, we introduce ALERT (Amplification-based Jailbreak Detector), an efficient and effective zero-shot jailbreak detector that applies two independent yet complementary classifiers to the amplified representations. Extensive experiments on three safety benchmarks demonstrate that ALERT achieves consistently strong zero-shot detection performance. Specifically, (i) across all datasets and attack strategies, ALERT reliably ranks among the top two methods, and (ii) it outperforms the second-best baseline by at least 10% in average Accuracy and F1-score, and sometimes by up to 40%.
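To make the pipeline concrete, the sketch below illustrates the general idea on synthetic data: activations are reweighted at the layer, module, and token granularities so that discriminative components dominate, and two simple complementary detectors fit only on benign prompts are combined with an OR rule. This is a minimal illustration under stated assumptions, not the paper's implementation: the activation tensor, the `fake_activations` generator, the hand-picked amplification weights, and the two detectors (`clf_distance`, `clf_norm`) are all hypothetical stand-ins for the learned components described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for LLM internals: each prompt yields activations of
# shape (layers, modules, tokens, dim). Jailbreak prompts carry an assumed
# shift in one safety-relevant layer/module on the final tokens.
L, M, T, D = 4, 2, 8, 16

def fake_activations(jailbreak: bool, n: int) -> np.ndarray:
    base = rng.normal(0.0, 1.0, size=(n, L, M, T, D))
    if jailbreak:
        base[:, 2, 1, -2:, :] += 1.5  # assumed safety-relevant signal
    return base

def amplify(acts, layer_w, module_w, token_w):
    # Multi-granularity amplification: reweight layers, modules, and tokens
    # so that the discriminative components dominate the pooled feature.
    a = acts * layer_w[None, :, None, None, None]
    a = a * module_w[None, None, :, None, None]
    a = a * token_w[None, None, None, :, None]
    return a.reshape(a.shape[0], -1)  # one flat feature vector per prompt

# Hypothetical weights selecting the assumed safety-relevant components.
layer_w = np.array([0.0, 0.0, 1.0, 0.0])
module_w = np.array([0.0, 1.0])
token_w = np.array([0.0] * 6 + [1.0, 1.0])

X_benign = amplify(fake_activations(False, 50), layer_w, module_w, token_w)
X_jail = amplify(fake_activations(True, 50), layer_w, module_w, token_w)

# Zero-shot: both detectors are calibrated on benign prompts only,
# so no jailbreak templates are ever seen during "training".
mu = X_benign.mean(axis=0)
sigma = X_benign.std(axis=0) + 1e-8

def clf_distance(x):  # detector 1: mean per-feature deviation from benign
    return np.abs((x - mu) / sigma).mean(axis=1)

def clf_norm(x):      # detector 2: overall activation magnitude
    return np.linalg.norm(x, axis=1)

def joint_decision(x, t1, t2):
    # Complementary OR rule: flag a prompt if either detector fires.
    return (clf_distance(x) > t1) | (clf_norm(x) > t2)

# Thresholds from the benign calibration set (95th percentile).
t1 = np.quantile(clf_distance(X_benign), 0.95)
t2 = np.quantile(clf_norm(X_benign), 0.95)
print("benign flagged:   ", joint_decision(X_benign, t1, t2).mean())
print("jailbreak flagged:", joint_decision(X_jail, t1, t2).mean())
```

On this toy data the amplified features separate cleanly, so nearly all synthetic jailbreak prompts are flagged while the benign false-positive rate stays near the 5% calibration level; in the paper the layers, modules, and tokens are discovered from real model activations rather than assumed.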