SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models to jailbreak attacks, a challenge exacerbated by existing defenses that often suffer from high latency or unstable detection due to output randomness. To overcome these limitations, the authors propose a lightweight jailbreak detection mechanism that reformulates detection as a scoring task over numerical tokens (e.g., digits 0–9). The method treats the token-level logit distribution as an intrinsic safety signal and introduces a dual-perspective scoring rule that jointly accounts for both malicious and benign characteristics of a query. Evaluated on LLaMA-3-8B, the approach reduces attack success rates by up to 22.66%, while requiring up to 173x less memory and 26x lower inference latency than prior guardrails. This improves detection stability and lowers false positive rates compared to existing methods.
📝 Abstract
Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing guardrail methods typically rely on internal features or textual responses to detect malicious queries, which either introduce substantial latency or suffer from the randomness in text generation. To overcome these limitations, we propose SelfGrader, a lightweight guardrail method that formulates jailbreak detection as a numerical grading problem using token-level logits. Specifically, SelfGrader evaluates the safety of a user query within a compact set of numerical tokens (NTs) (e.g., 0-9) and interprets their logit distribution as an internal safety signal. To align these signals with human intuition of maliciousness, SelfGrader introduces a dual-perspective scoring rule that considers both the maliciousness and benignness of the query, yielding a stable and interpretable score that reflects harmfulness and reduces the false positive rate simultaneously. Extensive experiments across diverse jailbreak benchmarks, multiple LLMs, and state-of-the-art guardrail baselines demonstrate that SelfGrader achieves up to a 22.66% reduction in ASR on LLaMA-3-8B, while maintaining significantly lower memory overhead (up to 173x) and latency (up to 26x).
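The grading mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the convention that grades run 0 (benign) to 9 (harmful), and the simple averaging rule used to combine the two perspectives are all assumptions for the sketch; the paper's exact scoring rule is not reproduced here.

```python
import math

DIGITS = list(range(10))  # the compact set of numerical tokens "0".."9"

def expected_grade(logits):
    """Softmax over the ten numeric-token logits, then the expected grade.

    `logits` is the model's logit for each digit token at the grading
    position; the softmax turns them into a distribution over grades.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift by max for stability
    z = sum(exps)
    return sum(d * e for d, e in zip(DIGITS, exps)) / z

def dual_perspective_score(maliciousness_logits, benignness_logits):
    """Hypothetical dual-perspective combination (an assumption, not the
    paper's rule): average the maliciousness grade with the inverted
    benignness grade, so a query must look harmful AND not-benign to
    score high. Range is [0, 9]; higher means more likely a jailbreak."""
    mal = expected_grade(maliciousness_logits)
    ben = expected_grade(benignness_logits)
    return 0.5 * (mal + (9.0 - ben))

# Toy logit vectors for one query: mass on high digits under the
# "how malicious?" prompt, mass on low digits under "how benign?".
mal_logits = [0, 0, 0, 0, 0, 1, 2, 3, 4, 5]
ben_logits = [5, 4, 3, 2, 1, 0, 0, 0, 0, 0]
harm_score = dual_perspective_score(mal_logits, ben_logits)
```

Because the score is read off the logit distribution at a single grading position rather than from sampled text, it avoids the generation randomness the abstract attributes to response-based guardrails.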
Problem

Research questions and friction points this paper is trying to address.

jailbreak detection
large language models
guardrail methods
token-level logits
malicious queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

jailbreak detection
token-level logits
lightweight guardrail
numerical grading
dual-perspective scoring
Zikai Zhang
Department of Computer Science and Engineering, University of Nevada, Reno, Reno, USA
Rui Hu
University of Nevada, Reno
Machine Learning · Security and Privacy · Internet-of-Things · Edge Computing
Olivera Kotevska
Oak Ridge National Laboratory, Oak Ridge, USA
Jiahao Xu
Nanyang Technological University
LLM Efficient Reasoning · NMT · Audio Translation · Sentence Embeddings