Who can we trust? LLM-as-a-jury for Comparative Assessment

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inconsistency of large language models (LLMs) used as automatic evaluators in pairwise text comparisons, which stems from their varying reliability and probabilistic biases and is compounded by the absence of human annotations for calibration. The authors propose BT-sigma, an extension of the Bradley–Terry model for the LLM-as-a-jury framework that explicitly models each LLM judge's discriminative ability under fully unsupervised conditions. Using only pairwise comparison data, BT-sigma jointly infers item rankings and judge reliability without external supervision. Experiments across multiple natural language generation benchmarks show that BT-sigma significantly outperforms simple averaging-based aggregation. Moreover, the learned discriminative parameters correlate strongly with the cycle consistency of LLM judgements, effectively enabling adaptive self-calibration of evaluator reliability.
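As a concrete illustration, here is a minimal sketch of a judge-aware Bradley–Terry model of this kind, assuming the common parameterisation in which judge k prefers item i over item j with probability sigmoid(s_k (theta_i − theta_j)), where theta are latent item scores and s_k is judge k's discriminator. The parameterisation, the `fit_bt_sigma` helper, and the toy data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, comparisons, n_items, n_judges):
    """Negative log-likelihood of a judge-aware Bradley-Terry model."""
    theta = params[:n_items]                    # latent item scores
    s = np.exp(params[n_items:])                # per-judge discriminators > 0
    nll = 0.0
    for winner, loser, judge in comparisons:
        z = s[judge] * (theta[winner] - theta[loser])
        nll += np.logaddexp(0.0, -z)            # -log sigmoid(z), numerically stable
    # Small ridge on all parameters breaks the shift/scale indeterminacy
    # between theta and s, and keeps a perfectly consistent judge's s finite.
    return nll + 1e-3 * np.sum(params ** 2)

def fit_bt_sigma(comparisons, n_items, n_judges):
    """Jointly fit item scores and judge discriminators by maximum likelihood."""
    x0 = np.zeros(n_items + n_judges)           # discriminators start at exp(0) = 1
    res = minimize(neg_log_likelihood, x0,
                   args=(comparisons, n_items, n_judges), method="L-BFGS-B")
    return res.x[:n_items], np.exp(res.x[n_items:])

# Toy data: judge 0 is consistent with quality 0 > 1 > 2; judge 1 is noisy.
comparisons = [(0, 1, 0), (1, 2, 0), (0, 2, 0), (2, 0, 1), (1, 0, 1)]
theta, s = fit_bt_sigma(comparisons, n_items=3, n_judges=2)
print("item scores:", theta.round(2), "judge discriminators:", s.round(2))
```

Under this reading, a larger fitted s_k means judge k's comparisons are treated as more informative during aggregation, while a judge with s_k near zero contributes almost nothing, which matches the unsupervised-calibration interpretation described above.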

📝 Abstract
Large language models (LLMs) are increasingly applied as automatic evaluators for natural language generation assessment, often using pairwise comparative judgements. Existing approaches typically rely on a single judge or aggregate multiple judges under an assumption of equal reliability. In practice, LLM judges vary substantially in performance across tasks and aspects, and their judgement probabilities may be biased and inconsistent. Furthermore, human-labelled supervision for judge calibration may be unavailable. We first empirically demonstrate that inconsistencies in LLM comparison probabilities exist and show that they limit the effectiveness of direct probability-based ranking. To address this, we study the LLM-as-a-jury setting and propose BT-sigma, a judge-aware extension of the Bradley-Terry model that introduces a discriminator parameter for each judge to jointly infer item rankings and judge reliability from pairwise comparisons alone. Experiments on benchmark NLG evaluation datasets show that BT-sigma consistently outperforms averaging-based aggregation methods, and that the learned discriminator correlates strongly with independent measures of the cycle consistency of LLM judgements. Further analysis reveals that BT-sigma can be interpreted as an unsupervised calibration mechanism that improves aggregation by modelling judge reliability.
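The cycle-consistency measure referenced in the abstract can be checked independently of BT-sigma. Below is a minimal sketch, assuming cycle consistency is measured as the fraction of item triples a judge ranks transitively; the `prefers` callable stands in for one LLM pairwise-comparison call, and the noisy toy judge is a hypothetical placeholder, not the paper's evaluation setup.

```python
import random
from itertools import combinations

def cycle_consistency(prefers, items):
    """Return the fraction of item triples a judge ranks transitively.

    prefers(a, b) -> True when the judge picks a over b; in practice this
    would wrap an LLM pairwise-comparison prompt.
    """
    consistent, total = 0, 0
    for a, b, c in combinations(items, 3):
        r_ab, r_bc, r_ac = prefers(a, b), prefers(b, c), prefers(a, c)
        # A cycle (a>b>c>a, or its mirror image) occurs exactly when the
        # first two judgements agree in direction but the closing one
        # contradicts them.
        is_cycle = (r_ab == r_bc) and (r_ac != r_ab)
        consistent += not is_cycle
        total += 1
    return consistent / total

# Toy judge: fixed latent quality scores plus Gaussian decision noise.
scores = {"sys_a": 2.0, "sys_b": 1.0, "sys_c": 0.0, "sys_d": -1.0}
noisy_judge = lambda x, y: (scores[x] - scores[y] + random.gauss(0, 1.5)) > 0
print("cycle consistency:", cycle_consistency(noisy_judge, list(scores)))
```

Raising the noise scale makes the toy judge less transitive, which is the behaviour the learned discriminator is reported to track.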
Problem

Research questions and friction points this paper is trying to address.

LLM-as-a-jury
comparative assessment
judge reliability
bias and inconsistency
unsupervised calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-a-jury
Bradley-Terry model
judge reliability
unsupervised calibration
pairwise comparison
Mengjie Qian
University of Cambridge
speech recognition, machine learning, spoken language assessment, low-resource
Guangzhi Sun
University of Cambridge
Speech and language technology, conversational AI
Mark J. F. Gales
Department of Engineering, University of Cambridge, UK
Kate M. Knill
Department of Engineering, University of Cambridge, UK