🤖 AI Summary
This work addresses the inefficiency of existing large language model (LLM) evaluation methods on generative tasks, where reliable rankings are difficult to obtain from limited samples. The authors propose the first extension of Item Response Theory (IRT) to continuous bounded scores, modeling generative-task scores with a heteroscedastic normal distribution. They further introduce an uncertainty-aware adaptive ranking and stopping mechanism. Using only 2% of evaluation items, their approach achieves 95% accuracy on its confident predictions across five benchmarks covering n-gram, embedding-based, and LLM-as-a-Judge metrics, and improves ranking reliability by 0.12 in Kendall's τ over random sampling, substantially enhancing both evaluation efficiency and ranking fidelity.
📝 Abstract
Computerized Adaptive Testing (CAT) has proven effective for efficient LLM evaluation on multiple-choice benchmarks, but modern LLM evaluation increasingly relies on generation tasks whose outputs are scored continuously rather than marked correct/incorrect. We present a principled extension of IRT-based adaptive testing to continuous bounded scores (ROUGE, BLEU, LLM-as-a-Judge) by replacing the Bernoulli response distribution with a heteroscedastic normal distribution. Building on this, we introduce an uncertainty-aware ranker with adaptive stopping criteria that achieves reliable model ranking while testing as few items as possible, minimizing evaluation cost. We validate our method on five benchmarks spanning n-gram-based, embedding-based, and LLM-as-a-Judge metrics. Using only 2% of the items, our method improves ranking correlation by 0.12 in Kendall's τ over random sampling, with 95% accuracy on confident predictions.
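To make the core modeling move concrete, here is a minimal sketch of what a heteroscedastic-normal response model for bounded scores could look like. The abstract does not specify the exact parameterization, so this sketch assumes a 2PL-style logistic mean function (ability θ, item discrimination a, difficulty b) and a variance that shrinks near the score boundaries; the names `expected_score` and `hetero_normal_loglik` and the scale parameter `s` are illustrative, not from the paper.

```python
import math

def expected_score(theta, a, b):
    # 2PL-style mean function: expected bounded score in (0, 1)
    # as a logistic function of ability theta, with item
    # discrimination a and difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def hetero_normal_loglik(score, theta, a, b, s=0.1):
    # Heteroscedastic normal log-likelihood for a score in [0, 1]:
    # the variance s * mu * (1 - mu) shrinks near the boundaries,
    # mimicking how bounded metrics like ROUGE compress there.
    # (Assumed variance form; the paper's choice may differ.)
    mu = expected_score(theta, a, b)
    var = s * mu * (1.0 - mu) + 1e-6  # small floor for stability
    return -0.5 * (math.log(2.0 * math.pi * var)
                   + (score - mu) ** 2 / var)
```

Under this model, item selection and stopping can reuse standard IRT machinery: the log-likelihood is differentiable in θ, so ability estimates and their uncertainties follow from maximum likelihood or a posterior, replacing the Bernoulli likelihood used for correct/incorrect responses.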