A Judge-Aware Ranking Framework for Evaluating Large Language Models without Ground Truth

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of evaluating large language models (LLMs) on open-ended tasks without ground-truth labels, where existing methods neglect the varying reliability of judge LLMs, leading to biased rankings and inaccurate uncertainty estimates. The authors propose a judge-reliability-aware ranking framework that extends the Bradley-Terry-Luce model with judge-specific discrimination parameters, jointly estimating the abilities of the evaluated models and the reliability of the judges. They present this as the first approach to explicitly model judge reliability within a reference-free evaluation paradigm, establish identifiability, and prove consistency and asymptotic normality of the maximum likelihood estimator, which yields statistically sound confidence intervals for rank differences. Experiments demonstrate that the method significantly improves alignment with human preferences across multiple benchmarks and a newly collected dataset, outperforms unweighted baselines in data efficiency, and yields well-calibrated uncertainty quantification.
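The summary does not spell out the functional form of the extension. A plausible reading, in the spirit of item-response-style discrimination parameters, is that each judge scales the latent score gap between the two models it compares; the symbols below (θ_i for model ability, β_k for judge discrimination, σ for the logistic function) are illustrative assumptions rather than the paper's notation:

\Pr(\text{model } i \succ \text{model } j \mid \text{judge } k) \;=\; \sigma\big(\beta_k(\theta_i - \theta_j)\big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}

Under this form, a large β_k corresponds to a reliable judge whose verdicts closely track the true quality gap, β_k near zero corresponds to a near-random judge, and setting all β_k equal recovers the standard Bradley-Terry-Luce model.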

📝 Abstract
Evaluating large language models (LLMs) on open-ended tasks without ground-truth labels is increasingly done via the LLM-as-a-judge paradigm. A critical but under-modeled issue is that judge LLMs differ substantially in reliability; treating all judges equally can yield biased leaderboards and misleading uncertainty estimates. More data can make evaluation more confidently wrong under misspecified aggregation. We propose a judge-aware ranking framework that extends the Bradley-Terry-Luce model by introducing judge-specific discrimination parameters, jointly estimating latent model quality and judge reliability from pairwise comparisons without reference labels. We establish identifiability up to natural normalizations and prove consistency and asymptotic normality of the maximum likelihood estimator, enabling confidence intervals for score differences and rank comparisons. Across multiple public benchmarks and a newly collected dataset, our method improves agreement with human preferences, achieves higher data efficiency than unweighted baselines, and produces calibrated uncertainty quantification for LLM rankings.
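To make the estimation step concrete, here is a minimal sketch of jointly fitting model abilities and judge discriminations by maximum likelihood on (winner, loser, judge) comparison triples, under the logistic form assumed above. This is not the authors' implementation; the function names (neg_log_lik, fit) and the specific normalizations are illustrative assumptions.

# Minimal sketch of judge-aware Bradley-Terry-Luce estimation (not the paper's code).
# Assumed model: P(i beats j | judge k) = sigmoid(beta_k * (theta_i - theta_j)).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, comps, n_models):
    # comps: integer array of (winner, loser, judge) triples
    theta = params[:n_models]            # latent model abilities
    beta = np.exp(params[n_models:])     # judge discriminations, kept positive via log-parameterization
    gap = theta[comps[:, 0]] - theta[comps[:, 1]]
    return -np.sum(np.log(expit(beta[comps[:, 2]] * gap) + 1e-12))

def fit(comps, n_models, n_judges):
    x0 = np.zeros(n_models + n_judges)
    res = minimize(neg_log_lik, x0, args=(comps, n_models), method="L-BFGS-B")
    theta = res.x[:n_models] - res.x[:n_models].mean()   # center abilities (location normalization)
    beta = np.exp(res.x[n_models:])
    # A full treatment also fixes the common scale (e.g. pins one beta to 1),
    # matching the paper's "identifiability up to natural normalizations".
    return theta, beta

# Toy usage: 3 models, 2 judges; judge 1 is much noisier than judge 0.
rng = np.random.default_rng(0)
true_theta, true_beta = np.array([1.0, 0.0, -1.0]), np.array([2.0, 0.3])
comps = []
for _ in range(3000):
    i, j = rng.choice(3, size=2, replace=False)
    k = int(rng.integers(2))
    i_wins = rng.random() < expit(true_beta[k] * (true_theta[i] - true_theta[j]))
    comps.append((i, j, k) if i_wins else (j, i, k))
theta_hat, beta_hat = fit(np.array(comps), n_models=3, n_judges=2)
print("abilities:", theta_hat, "discriminations:", beta_hat)

In the toy run, the noisier judge receives a markedly smaller discrimination estimate, which is the mechanism by which its votes are down-weighted in the final ranking.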
Problem

Research questions and friction points this paper is trying to address.

LLM evaluation
LLM-as-a-judge
judge reliability
ground-truth-free evaluation
ranking bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

judge-aware ranking
Bradley-Terry-Luce model
LLM evaluation
uncertainty quantification
pairwise comparison
Mingyuan Xu
Department of Statistics and Data Science, National University of Singapore
Xinzi Tan
Department of Statistics and Data Science, National University of Singapore
Jiawei Wu
National University of Singapore
Natural Language Processing, Vision and Language, Large Language Models
Doudou Zhou
National University of Singapore
High-dimensional Statistics, EHR Data Analysis, Change-point Detection, Transfer Learning