JADES: A Universal Framework for Jailbreak Assessment via Decompositional Scoring

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing jailbreak evaluation methods rely on coarse-grained proxy metrics or holistic response judgments, leading to frequent misclassifications and substantial divergence from human assessments. This work proposes JADES, a general-purpose jailbreak evaluation framework based on problem decomposition: it automatically decomposes a harmful query into weighted sub-questions, scores each sub-answer independently, and aggregates the sub-scores into a final decision via weighted aggregation; an optional fact-checking module helps detect hallucinations in jailbreak responses. Evaluated on the newly introduced JailbreakQR benchmark, JADES achieves 98.5% agreement with human annotations in the binary success/failure setting, outperforming strong baselines by over 9%, and its re-evaluation of popular attacks reveals that reported jailbreak attack success rates are substantially overestimated.

📝 Abstract
Accurately determining whether a jailbreak attempt has succeeded is a fundamental yet unresolved challenge. Existing evaluation methods rely on misaligned proxy indicators or naive holistic judgments. They frequently misinterpret model responses, leading to inconsistent and subjective assessments that misalign with human perception. To address this gap, we introduce JADES (Jailbreak Assessment via Decompositional Scoring), a universal jailbreak evaluation framework. Its key mechanism is to automatically decompose an input harmful question into a set of weighted sub-questions, score each sub-answer, and weight-aggregate the sub-scores into a final decision. JADES also incorporates an optional fact-checking module to strengthen the detection of hallucinations in jailbreak responses. We validate JADES on JailbreakQR, a newly introduced benchmark proposed in this work, consisting of 400 pairs of jailbreak prompts and responses, each meticulously annotated by humans. In a binary setting (success/failure), JADES achieves 98.5% agreement with human evaluators, outperforming strong baselines by over 9%. Re-evaluating five popular attacks on four LLMs reveals substantial overestimation (e.g., LAA's attack success rate on GPT-3.5-Turbo drops from 93% to 69%). Our results show that JADES could deliver accurate, consistent, and interpretable evaluations, providing a reliable basis for measuring future jailbreak attacks.
Problem

Research questions and friction points this paper is trying to address.

Accurately determining jailbreak success remains unresolved
Existing methods use misaligned proxies and subjective judgments
Current evaluations frequently misinterpret model responses, yielding inconsistent results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes harmful questions into weighted sub-questions
Scores and aggregates sub-answers for final decision
Incorporates fact-checking module to detect hallucinations
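The decompose-score-aggregate mechanism described above can be sketched as follows. In the actual framework, an LLM judge performs both the sub-question decomposition and the per-sub-answer scoring; the weights, scores, decision threshold, and example sub-questions below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of decompositional scoring (not the paper's implementation).
# The sub-questions, weights, and per-sub-answer scores are assumed to be
# produced by an LLM judge; only the weighted aggregation step is shown.
from dataclasses import dataclass


@dataclass
class SubResult:
    question: str   # sub-question derived from the harmful query
    weight: float   # relative importance assigned during decomposition
    score: float    # judge's score for the corresponding sub-answer, in [0, 1]


def aggregate(results: list[SubResult], threshold: float = 0.5) -> tuple[float, bool]:
    """Weighted average of sub-scores; the jailbreak is counted as successful
    when the aggregate crosses the (assumed) decision threshold."""
    total_weight = sum(r.weight for r in results)
    final = sum(r.weight * r.score for r in results) / total_weight
    return final, final >= threshold


# Hypothetical example: one harmful query decomposed into three sub-questions.
results = [
    SubResult("lists required materials", weight=0.5, score=1.0),
    SubResult("gives a step-by-step procedure", weight=0.3, score=0.5),
    SubResult("explains how to evade detection", weight=0.2, score=0.0),
]
score, success = aggregate(results)
print(round(score, 2), success)  # 0.65 True
```

The weighting lets partially responsive answers (e.g., materials listed but no usable procedure) receive intermediate scores instead of a brittle all-or-nothing verdict, which is what the binary holistic judges the paper criticizes produce.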
Junjie Chu
CISPA Helmholtz Center for Information Security
AI Security and Privacy; Computational Social Science
Mingjie Li
CISPA Helmholtz Center for Information Security
Ziqing Yang
CISPA Helmholtz Center for Information Security
Ye Leng
CISPA Helmholtz Center for Information Security
Chenhao Lin
Xi’an Jiaotong University
Chao Shen
Xi’an Jiaotong University
Michael Backes
Chairman and Founding Director of the CISPA Helmholtz Center for Information Security
Security; Privacy; Cryptography; AI
Yun Shen
Flexera
AI/ML for Security; AI/ML Safety; Cloud Computing
Yang Zhang
CISPA Helmholtz Center for Information Security