How to Correctly Report LLM-as-a-Judge Evaluations

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) used as evaluators suffer from estimation bias and noise because their specificity and sensitivity are imperfect, undermining the statistical reliability of automated evaluation. Method: We propose the first practical framework that corrects this bias and constructs statistically rigorous confidence intervals. It introduces a plug-in bias-correction mechanism that jointly models uncertainty over both the test and calibration sets, coupled with an adaptive sampling algorithm that optimizes how calibration samples are allocated. Leveraging the estimated specificity and sensitivity, the framework uses statistical inference to derive bias-corrected confidence intervals. Contribution/Results: The approach substantially reduces both bias and variance in accuracy estimation, and evaluation across multiple benchmarks demonstrates its robustness, reliability, and generalizability, making LLM-based automated evaluation reproducible, interpretable, and statistically trustworthy.
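The plug-in correction is, in spirit, the classical Rogan–Gladen adjustment: the judge's observed pass rate mixes true accuracy with its error rates, and the relation can be inverted once sensitivity and specificity are estimated. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
def corrected_accuracy(judge_pass_rate: float,
                       sensitivity: float,
                       specificity: float) -> float:
    """Plug-in (Rogan-Gladen style) correction of a raw LLM-judge
    pass rate using estimated sensitivity and specificity.

    judge_pass_rate : fraction of items the judge labels as correct
    sensitivity     : P(judge says correct | answer truly correct)
    specificity     : P(judge says incorrect | answer truly incorrect)
    """
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("judge must beat chance: sens + spec > 1")
    # Observed rate: q = acc*sens + (1 - acc)*(1 - spec).
    # Solve for acc and clip to the valid range.
    acc = (judge_pass_rate + specificity - 1.0) / denom
    return min(1.0, max(0.0, acc))
```

For example, a raw pass rate of 0.80 under a judge with 0.90 sensitivity and 0.90 specificity corrects upward to 0.875; with a perfect judge (both rates equal to 1) the correction is the identity.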

📝 Abstract
Large language models (LLMs) are increasingly used as evaluators in lieu of humans. While scalable, their judgments are noisy due to the imperfect specificity and sensitivity of LLMs, leading to biased accuracy estimates. Although bias-correction methods exist, they are underutilized in LLM research and typically assume exact knowledge of the model's specificity and sensitivity. In practice, however, we only have estimates of these values, and it is not well understood how to construct confidence intervals properly from estimates alone. This work presents a simple plug-in framework that corrects such bias and constructs confidence intervals reflecting uncertainty from both the test and calibration datasets, enabling practical and statistically sound LLM-based evaluation. Additionally, to reduce uncertainty in the accuracy estimate, we introduce an adaptive algorithm that efficiently allocates calibration sample sizes.
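One way to reflect uncertainty from both the test and calibration sets, as the abstract describes, is a parametric bootstrap over all three estimated rates (pass rate, sensitivity, specificity). This is a hedged sketch under that assumption; the paper's actual interval construction may be analytic, and all names below are illustrative:

```python
import numpy as np

def corrected_ci(n_test, k_pass, n_pos_cal, k_sens, n_neg_cal, k_spec,
                 alpha=0.05, n_boot=10_000, seed=0):
    """Bootstrap CI for the bias-corrected accuracy, propagating
    binomial sampling noise from the test set AND the calibration set.

    n_test, k_pass   : test items and judge-passed count
    n_pos_cal, k_sens: calibration positives and judge true-positives
    n_neg_cal, k_spec: calibration negatives and judge true-negatives
    """
    rng = np.random.default_rng(seed)
    # Resample each rate from its estimated binomial distribution.
    q = rng.binomial(n_test, k_pass / n_test, n_boot) / n_test
    sens = rng.binomial(n_pos_cal, k_sens / n_pos_cal, n_boot) / n_pos_cal
    spec = rng.binomial(n_neg_cal, k_spec / n_neg_cal, n_boot) / n_neg_cal
    denom = sens + spec - 1.0
    ok = denom > 1e-6  # drop degenerate draws where the judge is at chance
    acc = np.clip((q[ok] + spec[ok] - 1.0) / denom[ok], 0.0, 1.0)
    lo, hi = np.quantile(acc, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

Because calibration noise enters the denominator, intervals built this way are wider than naive binomial intervals on the raw pass rate, which is exactly the extra uncertainty the abstract says is usually ignored.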
Problem

Research questions and friction points this paper is trying to address.

Correcting biased accuracy estimates in LLM-as-a-judge evaluations
Constructing confidence intervals with imperfect specificity and sensitivity estimates
Reducing uncertainty through adaptive calibration sample allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias correction using plug-in framework
Confidence intervals incorporating dataset uncertainty
Adaptive algorithm optimizing calibration sample sizes
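The adaptive-allocation idea can be sketched as a greedy rule: give the next calibration label to whichever arm (positives for sensitivity, negatives for specificity) most reduces the delta-method variance of the corrected accuracy estimate. This is an illustrative heuristic, not necessarily the paper's algorithm:

```python
def choose_next_calibration_arm(acc, sens, spec, n_pos, n_neg):
    """Greedy heuristic for calibration sample allocation.

    acc          : current corrected accuracy estimate
    sens, spec   : current sensitivity / specificity estimates
    n_pos, n_neg : calibration samples spent on each arm so far
    """
    denom = sens + spec - 1.0
    # Delta-method sensitivity of the corrected accuracy to each rate,
    # times each rate's binomial variance per sample.
    g_sens = (acc / denom) ** 2 * sens * (1.0 - sens)
    g_spec = ((1.0 - acc) / denom) ** 2 * spec * (1.0 - spec)
    # Marginal variance reduction from one more label in each arm.
    red_pos = g_sens * (1.0 / n_pos - 1.0 / (n_pos + 1))
    red_neg = g_spec * (1.0 / n_neg - 1.0 / (n_neg + 1))
    return "positives" if red_pos >= red_neg else "negatives"
```

Intuitively, when the estimated accuracy is high, the sensitivity term dominates the variance, so the rule spends more calibration labels on positives; when accuracy is low, it favors negatives.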