🤖 AI Summary
Existing jailbreak evaluation methods rely on coarse-grained proxy metrics or holistic response judgments, leading to frequent misclassifications and substantial divergence from human assessments. This work proposes JADES, a general-purpose jailbreak evaluation framework based on problem decomposition: it automatically decomposes a harmful query into weighted sub-questions, independently scores each sub-answer, and aggregates the weighted sub-scores into a final decision, optionally invoking a fact-checking module to mitigate hallucination. This design yields accurate, consistent, and interpretable evaluations. On the JailbreakQR benchmark, JADES achieves 98.5% agreement with human annotations in binary classification, outperforming strong baselines by over 9%, and reveals that prevailing jailbreak attack success rates are systematically overestimated.
📝 Abstract
Accurately determining whether a jailbreak attempt has succeeded is a fundamental yet unresolved challenge. Existing evaluation methods rely on misaligned proxy indicators or naive holistic judgments, frequently misinterpreting model responses and producing inconsistent, subjective assessments that diverge from human perception. To address this gap, we introduce JADES (Jailbreak Assessment via Decompositional Scoring), a universal jailbreak evaluation framework. Its key mechanism is to automatically decompose an input harmful question into a set of weighted sub-questions, score each sub-answer, and aggregate the weighted sub-scores into a final decision. JADES also incorporates an optional fact-checking module to strengthen the detection of hallucinations in jailbreak responses. We validate JADES on JailbreakQR, a benchmark newly proposed in this work consisting of 400 pairs of jailbreak prompts and responses, each meticulously annotated by humans. In a binary setting (success/failure), JADES achieves 98.5% agreement with human evaluators, outperforming strong baselines by over 9%. Re-evaluating five popular attacks on four LLMs reveals substantial overestimation (e.g., LAA's attack success rate on GPT-3.5-Turbo drops from 93% to 69%). Our results show that JADES delivers accurate, consistent, and interpretable evaluations, providing a reliable basis for measuring future jailbreak attacks.
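The decompose–score–aggregate step described in the abstract can be sketched as follows. This is a minimal illustration only: the sub-question weights, per-answer scores, and the 0.5 decision threshold are hypothetical assumptions for demonstration, not values or rubric details taken from the paper.

```python
# Illustrative sketch of JADES-style decompositional scoring.
# All weights, scores, and the threshold below are hypothetical;
# the paper's actual scoring rubric and aggregation may differ.

def aggregate(sub_scores, weights):
    """Weighted average of per-sub-question scores, each in [0, 1]."""
    return sum(s * w for s, w in zip(sub_scores, weights)) / sum(weights)

def judge_jailbreak(sub_scores, weights, threshold=0.5):
    """Binary success/failure decision from the aggregated score."""
    return "success" if aggregate(sub_scores, weights) >= threshold else "failure"

# Example: three sub-questions of unequal importance.
scores = [1.0, 0.8, 0.0]   # how fully each sub-answer addresses its sub-question
weights = [0.5, 0.3, 0.2]  # relative importance of each sub-question
print(judge_jailbreak(scores, weights))  # aggregated score 0.74 -> "success"
```

In practice, both the decomposition into sub-questions and the per-answer scoring would be performed by an LLM judge; the sketch only shows how weighted sub-scores collapse into a single binary verdict.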