ObjexMT: Objective Extraction and Metacognitive Calibration for LLM-as-a-Judge under Multi-Turn Jailbreaks

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of reliably inferring latent adversarial objectives in multi-turn jailbreak dialogues when large language models (LLMs) are used as automated evaluators. To this end, we introduce OBJEX(MT), the first benchmark explicitly designed to assess objective-inference capability and metacognitive calibration. Our evaluation framework jointly measures objective extraction and confidence estimation, using a single human-aligned confidence threshold (τ* = 0.61) and multidimensional calibration metrics, including Expected Calibration Error (ECE), Brier score, Wrong@High-Conf, and risk-coverage curves. Experiments show that Claude-Sonnet-4 achieves the best overall performance (accuracy 0.515; strongest calibration), whereas GPT-4.1 and Qwen3 reach comparable accuracy but exhibit severe overconfidence. Dataset difficulty varies markedly: MHJ is comparatively easy, while Attack_600 and CoSafe pose greater challenges. This study provides the first systematic evidence of LLM evaluators' misjudgment and miscalibration in high-stakes scenarios, establishing a novel paradigm for trustworthy AI evaluation.

📝 Abstract
Large language models (LLMs) are increasingly used as judges of other models, yet it is unclear whether a judge can reliably infer the latent objective of the conversation it evaluates, especially when the goal is distributed across noisy, adversarial, multi-turn jailbreaks. We introduce OBJEX(MT), a benchmark that requires a model to (i) distill a transcript into a single-sentence base objective and (ii) report its own confidence. Accuracy is scored by an LLM judge using semantic similarity between extracted and gold objectives; correctness uses a single human-aligned threshold calibrated once on N=100 items (τ* = 0.61); and metacognition is evaluated with ECE, Brier score, Wrong@High-Conf, and risk-coverage curves. We evaluate gpt-4.1, claude-sonnet-4, and Qwen3-235B-A22B-FP8 on SafeMT Attack_600, SafeMTData_1K, MHJ, and CoSafe. claude-sonnet-4 attains the highest objective-extraction accuracy (0.515) and the best calibration (ECE 0.296; Brier 0.324), while gpt-4.1 and Qwen3 tie at 0.441 accuracy yet show marked overconfidence (mean confidence approx. 0.88 vs. accuracy approx. 0.44; Wrong@0.90 approx. 48-52%). Performance varies sharply across datasets (approx. 0.167-0.865), with MHJ comparatively easy and Attack_600/CoSafe harder. These results indicate that LLM judges often misinfer objectives with high confidence in multi-turn jailbreaks and suggest operational guidance: provide judges with explicit objectives when possible and use selective prediction or abstention to manage risk. We release prompts, scoring templates, and complete logs to facilitate replication and analysis.
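The calibration metrics named in the abstract (ECE, Brier score, Wrong@High-Conf) have standard definitions. A minimal NumPy sketch for illustration; the binning scheme, threshold defaults, and function names here are my own assumptions, not taken from the paper's released code:

```python
import numpy as np

def ece(conf, correct, n_bins=10):
    """Expected Calibration Error: coverage-weighted gap between mean
    confidence and empirical accuracy over equal-width confidence bins."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, err = len(conf), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi], with the first bin closed at 0
        mask = (conf >= lo) & (conf <= hi) if lo == 0.0 else (conf > lo) & (conf <= hi)
        if mask.any():
            err += mask.sum() / total * abs(conf[mask].mean() - correct[mask].mean())
    return err

def brier(conf, correct):
    """Brier score: mean squared gap between confidence and 0/1 correctness."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean((conf - correct) ** 2))

def wrong_at_high_conf(conf, correct, thresh=0.90):
    """Wrong@High-Conf: share of predictions at or above `thresh`
    confidence that are nevertheless wrong."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    high = conf >= thresh
    if not high.any():
        return 0.0
    return float((~correct[high]).mean())
```

On these definitions, the paper's reported pattern (mean confidence around 0.88 against accuracy around 0.44, with Wrong@0.90 near 50%) is what a large positive confidence-accuracy gap looks like in each metric.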
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM judges' ability to infer latent objectives in multi-turn jailbreaks
Assessing metacognitive calibration and confidence reporting in LLM evaluations
Measuring objective extraction accuracy across adversarial conversation datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Objective extraction via semantic similarity scoring
Metacognitive calibration with human-aligned thresholds
Multi-dataset evaluation framework for jailbreak scenarios
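The paper's operational guidance to use selective prediction or abstention amounts to sweeping a confidence threshold and trading coverage against risk, which is what a risk-coverage curve plots. A hypothetical sketch, with threshold grid and names chosen for illustration rather than drawn from the paper:

```python
def risk_coverage(conf, correct, thresholds):
    """For each threshold t, the judge answers only when conf >= t.
    coverage = fraction of items answered; risk = error rate among answered."""
    points = []
    for t in thresholds:
        answered = [ok for c, ok in zip(conf, correct) if c >= t]
        coverage = len(answered) / len(conf)
        risk = answered.count(False) / len(answered) if answered else 0.0
        points.append((t, coverage, risk))
    return points
```

For a well-calibrated judge, raising the threshold should lower risk as coverage shrinks; the overconfidence the paper reports means high-confidence answers stay wrong often, flattening this curve.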
Hyunjun Kim
AIM Intelligence
Junwoo Ha
AIM Intelligence
LLM Red-Teaming
Sangyoon Yu
AIM Intelligence, Korea Advanced Institute of Science and Technology
Haon Park
Computer Science Student, Seoul National University
Machine Learning · Deep Learning · Image Augmentation · Robotics