Confident Rankings with Fewer Items: Adaptive LLM Evaluation with Continuous Scores

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of existing large language model (LLM) evaluation methods on generative tasks, where reliable rankings are difficult to obtain from limited samples. The authors propose the first extension of Item Response Theory (IRT) to continuous bounded scoring, modeling generative-task scores with a heteroscedastic normal distribution, and introduce an uncertainty-aware adaptive ranking and stopping mechanism. Using only 2% of evaluation items, the approach achieves 95% accuracy on its confident predictions across five benchmarks spanning n-gram, embedding-based, and LLM-as-Judge metrics, and substantially improves ranking reliability, yielding a 0.12 increase in Kendall's τ over random sampling.
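The core modeling idea, replacing classical IRT's Bernoulli response distribution with a heteroscedastic normal over bounded scores, can be sketched as follows. The specific parameterization below (a logistic mean in ability minus difficulty, a mean-dependent variance, and the scale constant `k`) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loglik(theta, b, scores, k=0.05):
    """Log-likelihood of bounded scores under a heteroscedastic normal
    IRT model (assumed parameterization, for illustration only).

    theta  : candidate ability of the model under test
    b      : per-item difficulty parameters, shape (n_items,)
    scores : observed scores in [0, 1], shape (n_items,)
    k      : scale of the mean-dependent variance (assumed form)
    """
    mu = sigmoid(theta - b)            # expected score rises with ability
    var = k * mu * (1.0 - mu) + 1e-6   # variance shrinks near the score bounds
    return np.sum(-0.5 * np.log(2 * np.pi * var)
                  - (scores - mu) ** 2 / (2 * var))

def estimate_ability(b, scores, grid=np.linspace(-4, 4, 401)):
    """Maximum-likelihood ability estimate via a simple grid search."""
    lls = np.array([loglik(t, b, scores) for t in grid])
    return grid[np.argmax(lls)]

# Toy check: scores simulated from a model with known ability
rng = np.random.default_rng(0)
b = rng.normal(0.0, 1.0, size=50)            # hypothetical item difficulties
true_theta = 1.5
scores = np.clip(rng.normal(sigmoid(true_theta - b), 0.1), 0.0, 1.0)
theta_hat = estimate_ability(b, scores)
```

In a full adaptive loop, the item with the highest expected information at the current ability estimate would be administered next; the grid search here is a deliberately simple stand-in for proper optimization.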

📝 Abstract
Computerized Adaptive Testing (CAT) has proven effective for efficient LLM evaluation on multiple-choice benchmarks, but modern LLM evaluation increasingly relies on generation tasks where outputs are scored continuously rather than marked correct/incorrect. We present a principled extension of IRT-based adaptive testing to continuous bounded scores (ROUGE, BLEU, LLM-as-a-Judge) by replacing the Bernoulli response distribution with a heteroskedastic normal distribution. Building on this, we introduce an uncertainty-aware ranker with adaptive stopping criteria that achieves reliable model ranking while testing as few items, and as cheaply, as possible. We validate our method on five benchmarks spanning n-gram-based, embedding-based, and LLM-as-judge metrics. Our method uses 2% of the items while improving ranking correlation by 0.12 τ over random sampling, with 95% accuracy on confident predictions.
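The headline "+0.12 τ" is a difference in Kendall rank correlation between the full-benchmark ranking and the rankings recovered from small subsamples. A minimal sketch of how that comparison is computed, using made-up scores purely to illustrate the metric (not the paper's data):

```python
from scipy.stats import kendalltau

# Hypothetical per-model average scores for six models:
true_scores     = [0.81, 0.74, 0.69, 0.62, 0.55, 0.41]  # full benchmark
adaptive_scores = [0.79, 0.75, 0.66, 0.63, 0.52, 0.44]  # adaptive 2% subset
random_scores   = [0.70, 0.77, 0.60, 0.66, 0.58, 0.40]  # random 2% subset

# Kendall's tau counts concordant vs. discordant model pairs, so it rewards
# recovering the correct ordering even when absolute scores drift.
tau_adaptive, _ = kendalltau(true_scores, adaptive_scores)
tau_random, _ = kendalltau(true_scores, random_scores)
```

Here the adaptive subsample preserves the full ordering (τ = 1.0) while the random one swaps two pairs, which is the kind of gap the paper's 0.12 improvement summarizes across benchmarks.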
Problem

Research questions and friction points this paper is trying to address.

LLM evaluation
continuous scores
adaptive testing
model ranking
efficient evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Computerized Adaptive Testing
Item Response Theory
Continuous Scoring
Uncertainty-aware Ranking
LLM Evaluation
Esma Balkir (Trismik)
Alice Pernthaller (Trismik)
Marco Basaldella (Amazon)
José Hernández-Orallo (Leverhulme Centre for the Future of Intelligence, University of Cambridge; Universitat Politècnica de València)
Nigel Collier (Professor of Natural Language Processing, University of Cambridge)