Does Context Matter? ContextualJudgeBench for Evaluating LLM-based Judges in Contextual Settings

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based evaluators are assessed predominantly on context-agnostic instruction-following tasks, neglecting realistic context-dependent scenarios such as RAG and summarization, and existing benchmarks do not model conditional evaluation criteria (e.g., verifying factual accuracy *before* assessing completeness). Method: ContextualJudgeBench, the first benchmark dedicated to context-aware judge evaluation, comprises eight realistic use cases and 2,000 challenging response pairs. It introduces a conditional evaluation paradigm and a multi-pronged data construction pipeline that combines existing human annotations with model-based perturbations. Contribution/Results: A unified study across 11 specialized judge models and 9 general-purpose LLMs, measured with consistent accuracy, reveals that even the best-performing model (OpenAI's o1) barely reaches 55% consistent accuracy, substantially lower than its performance on context-agnostic tasks, indicating that context awareness is a critical bottleneck for current evaluators.

📝 Abstract
The large language model (LLM)-as-judge paradigm has been used to meet the demand for a cheap, reliable, and fast evaluation of model outputs during AI system development and post-deployment monitoring. While judge models -- LLMs finetuned to specialize in assessing and critiquing model outputs -- have been touted as general-purpose evaluators, they are typically evaluated only on non-contextual scenarios, such as instruction following. The omission of contextual settings -- those where external information is used as context to generate an output -- is surprising given the increasing prevalence of retrieval-augmented generation (RAG) and summarization use cases. Contextual assessment is uniquely challenging, as evaluation often depends on practitioner priorities, leading to conditional evaluation criteria (e.g., comparing responses based on factuality and then considering completeness if they are equally factual). To address the gap, we propose ContextualJudgeBench, a judge benchmark with 2,000 challenging response pairs across eight splits inspired by real-world contextual evaluation scenarios. We build our benchmark with a multi-pronged data construction pipeline that leverages both existing human annotations and model-based perturbations. Our comprehensive study across 11 judge models and 9 general-purpose models reveals that the contextual information and its assessment criteria present a significant challenge to even state-of-the-art models. For example, OpenAI's o1, the best-performing model, barely reaches 55% consistent accuracy.
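The conditional criterion the abstract describes (compare on factuality first, fall back to completeness only on a tie) can be sketched as a simple pairwise comparator. This is a minimal illustration, not the paper's implementation; the `factuality` and `completeness` scoring functions are hypothetical stand-ins for a judge model's per-criterion outputs:

```python
from typing import Callable

def conditional_judge(resp_a: str, resp_b: str,
                      factuality: Callable[[str], float],
                      completeness: Callable[[str], float]) -> str:
    """Pairwise verdict under a conditional criterion: factuality is
    decisive; completeness is consulted only when factuality ties."""
    fa, fb = factuality(resp_a), factuality(resp_b)
    if fa != fb:
        return "A" if fa > fb else "B"
    ca, cb = completeness(resp_a), completeness(resp_b)
    if ca != cb:
        return "A" if ca > cb else "B"
    return "tie"
```

In this framing, a judge is "consistent" only if its verdict survives such ordered criteria, which is stricter than a single holistic preference score.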
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLM-based judges in contextual settings.
Addresses lack of contextual evaluation in AI systems.
Proposes ContextualJudgeBench for real-world assessment scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

ContextualJudgeBench for contextual LLM evaluation
Multi-pronged data construction pipeline
Evaluation of 11 judge and 9 general models