🤖 AI Summary
Despite alignment efforts, large language models remain vulnerable to jailbreak attacks, and existing detection methods incur substantial computational overhead. To address this, the authors propose FJD (Free Jailbreak Detection), a lightweight detection method that imposes no additional inference cost. FJD exploits distributional differences in the confidence of the first-token logits between jailbreak and benign prompts, and sharpens this signal with three techniques: temperature scaling of the logits, affirmative instruction prefixing, and virtual instruction learning, together enabling robust first-token confidence analysis. Crucially, FJD requires only the single forward pass the model already performs and introduces no auxiliary models or repeated inference steps, enabling real-time detection. Experiments on major aligned models, including Llama-3-8B-Instruct and Qwen2-7B-Instruct, demonstrate detection accuracy above 95% with negligible latency overhead (<0.5 ms), significantly outperforming existing baselines.
📝 Abstract
Although widely deployed large language models (LLMs) are aligned to improve safety, they remain susceptible to jailbreak attacks that can elicit inappropriate content. Jailbreak detection methods show promise in mitigating such attacks, but existing approaches rely on auxiliary models or multiple model inferences and therefore entail significant computational costs. In this paper, we first present a finding that the difference in output distributions between jailbreak and benign prompts can be used to detect jailbreak prompts. Based on this finding, we propose Free Jailbreak Detection (FJD), which prepends an affirmative instruction to the input and scales the logits by temperature, so that jailbreak and benign prompts can be distinguished by the confidence of the first generated token. We further enhance FJD's detection performance through the integration of virtual instruction learning. Extensive experiments on aligned LLMs show that FJD effectively detects jailbreak prompts with almost no additional computational cost during LLM inference.
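To make the core mechanism concrete, the sketch below shows how first-token confidence with an affirmative prefix and temperature scaling might be computed using the Hugging Face transformers API. It is a minimal illustration, not the paper's implementation: the prefix wording, temperature, decision threshold, and the direction of the comparison are hypothetical placeholders that would have to be calibrated on labeled jailbreak/benign prompts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # any aligned chat model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

# Hypothetical values; the paper's actual prefix, temperature, and
# threshold would be chosen/calibrated on held-out labeled prompts.
AFFIRMATIVE_PREFIX = "Respond to the following request, beginning with 'Sure':\n"
TEMPERATURE = 2.0
THRESHOLD = 0.5


@torch.no_grad()
def first_token_confidence(prompt: str) -> float:
    """Max softmax probability of the first generated token, computed
    from the same forward pass the model would run anyway."""
    messages = [{"role": "user", "content": AFFIRMATIVE_PREFIX + prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    # Logits at the last input position predict the first output token.
    logits = model(input_ids).logits[0, -1]
    probs = torch.softmax(logits / TEMPERATURE, dim=-1)
    return probs.max().item()


def looks_like_jailbreak(prompt: str) -> bool:
    # Illustrative decision rule: assumes jailbreak prompts yield lower
    # first-token confidence under the affirmative prefix than benign
    # prompts; the inequality direction and THRESHOLD are assumptions.
    return first_token_confidence(prompt) < THRESHOLD
```

Since the detector reads logits from the forward pass that inference performs regardless, the only added work is a softmax over one position and a threshold comparison, which is consistent with the paper's claim of almost no extra computational cost.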