How Real is Your Jailbreak? Fine-grained Jailbreak Evaluation with Anchored Reference

📅 2026-01-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the critical limitation of existing automatic evaluation methods for jailbreaking attacks on large language models, which rely on coarse-grained harm assessments and consequently overestimate attack success rates. To remedy this, we propose FJAR, a novel framework featuring an anchor-based, fine-grained evaluation mechanism. FJAR establishes high-quality reference standards by decomposing model responses into five distinct categories—refusal, irrelevant, unhelpful, incorrect, and successful—and employs harmless tree decomposition to align closely with human judgment. Beyond accurately determining whether a jailbreak truly fulfills malicious intent, FJAR identifies specific failure modes and offers actionable insights for refining attacks. Experimental results demonstrate that FJAR significantly outperforms current methods across multiple metrics and achieves the highest agreement with human evaluators, effectively revealing the genuine efficacy of jailbreaking attempts.

Technology Category

Application Category

📝 Abstract
Jailbreak attacks present a significant challenge to the safety of Large Language Models (LLMs), yet current automated evaluation methods largely rely on coarse classifications that focus mainly on harmfulness, leading to substantial overestimation of attack success. To address this problem, we propose FJAR, a fine-grained jailbreak evaluation framework with anchored references. We first categorized jailbreak responses into five fine-grained categories: Rejective, Irrelevant, Unhelpful, Incorrect, and Successful, based on the degree to which the response addresses the malicious intent of the query. This categorization serves as the basis for FJAR. Then, we introduce a novel harmless tree decomposition approach to construct high-quality anchored references by breaking down the original queries. These references guide the evaluator in determining whether the response genuinely fulfills the original query. Extensive experiments demonstrate that FJAR achieves the highest alignment with human judgment and effectively identifies the root causes of jailbreak failures, providing actionable guidance for improving attack strategies.
Problem

Research questions and friction points this paper is trying to address.

jailbreak evaluation
Large Language Models
automated evaluation
attack success overestimation
fine-grained classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-grained evaluation
jailbreak attack
anchored reference
harmless tree decomposition
LLM safety
🔎 Similar Papers
No similar papers found.
S
Songyang Liu
Key Laboratory of Trustworthy Distributed Computing and Service, Beijing University of Posts and Telecommunications, Beijing, China
Chaozhuo Li
Chaozhuo Li
Microsoft Research Aisa
R
Rui Pu
Key Laboratory of Trustworthy Distributed Computing and Service, Beijing University of Posts and Telecommunications, Beijing, China
Litian Zhang
Litian Zhang
Beihang University
Chenxu Wang
Chenxu Wang
University of Science and Technology of China
Z
Zejian Chen
Key Laboratory of Trustworthy Distributed Computing and Service, Beijing University of Posts and Telecommunications, Beijing, China
Yuting Zhang
Yuting Zhang
HKUST(GZ)
rPPGComputer Vision
Y
Yiming Hei
Artificial Intelligence Research Institute, China Academy of Information and Communications Technology, Beijing, China