Efficient Inference for Noisy LLM-as-a-Judge Evaluation

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the systematic, non-random errors inherent in using large language models as automatic evaluators (LLM-as-a-judge), which induce bias in metric estimation. For the first time, it unifies two debiasing approaches—misclassification model correction and prediction residual–driven proxy outcome methods (e.g., PPI)—within a semiparametric efficiency framework. The authors derive an optimal estimator based on the efficient influence function and theoretically demonstrate that PPI-type methods achieve lower asymptotic variance under certain conditions. Both theoretical analysis and empirical experiments confirm that the proposed efficient estimator substantially outperforms existing methods. An open-source implementation is provided to enable unbiased inference and fair comparison in LLM-based evaluation.
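To make the PPI idea concrete: a PPI-style mean estimator takes the judge's average score on a large unlabeled set and adds the average residual (truth minus judge) measured on a small gold-labeled set. The sketch below is illustrative only; the error rates, sample sizes, and the `judge` function are made-up assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: binary gold labels y and noisy LLM-judge verdicts f.
# The judge's errors are asymmetric (systematic), so the naive mean is biased.
n_unlabeled, n_labeled = 10_000, 200
y_unlabeled = rng.binomial(1, 0.6, n_unlabeled)   # truth we never observe
y_labeled = rng.binomial(1, 0.6, n_labeled)       # small gold-standard set

def judge(y, rng):
    # Flip positives with prob 0.1 and negatives with prob 0.3 (non-random error).
    flip_prob = np.where(y == 1, 0.1, 0.3)
    return np.abs(y - rng.binomial(1, flip_prob))

f_unlabeled = judge(y_unlabeled, rng)
f_labeled = judge(y_labeled, rng)

naive = f_unlabeled.mean()  # biased: converges to ~0.66, not 0.6
# PPI-style estimate: judge mean on the big set, plus the residual
# (truth minus judge) calibrated on the small gold-labeled set.
ppi = f_unlabeled.mean() + (y_labeled - f_labeled).mean()
```

The residual term removes the judge's systematic bias at the cost of extra variance driven by the small gold-labeled sample, which is exactly the trade-off the paper's efficiency analysis formalizes.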

📝 Abstract
Large language models (LLMs) are increasingly used as automatic evaluators of generative AI outputs, a paradigm often referred to as "LLM-as-a-judge." In practice, LLM judges are imperfect predictors of the underlying truth and can exhibit systematic, non-random errors. Two main approaches have recently been proposed to address this issue: (i) direct measurement-error correction based on misclassification models such as Rogan-Gladen-style estimators, and (ii) surrogate-outcome approaches such as prediction-powered inference (PPI), which correct bias by calibrating prediction residuals on a small set of gold-standard human labels. In this paper, we systematically study the performance of these two approaches for estimating mean parameters (e.g., average benchmark scores or pairwise win rates). Leveraging tools from semiparametric efficiency theory, we unify the two classes of estimators by deriving explicit forms of efficient influence function (EIF)-based efficient estimators and characterize conditions under which PPI-style estimators attain strictly smaller asymptotic variance than measurement-error corrections. We verify our theoretical results in simulations and demonstrate the methods on real-data examples. We provide an implementation of the benchmarked methods and comparison utilities at https://github.com/yiqunchen/debias-llm-as-a-judge.
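The other family the abstract mentions, Rogan-Gladen-style misclassification correction, instead estimates the judge's sensitivity and specificity on the gold-labeled set and inverts the error model. A minimal sketch, with the same illustrative (assumed) error rates as above and hypothetical variable names:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: LLM-judge verdicts f on a large evaluation set, plus a
# small gold-labeled calibration set (y_cal, f_cal).
n, m = 10_000, 300
theta = 0.6                                   # true win rate (unknown in practice)
y = rng.binomial(1, theta, n)
y_cal = rng.binomial(1, theta, m)

def judge(y, rng):
    # Flip positives with prob 0.1 and negatives with prob 0.3.
    return np.abs(y - rng.binomial(1, np.where(y == 1, 0.1, 0.3)))

f, f_cal = judge(y, rng), judge(y_cal, rng)

apparent = f.mean()                           # biased judge-only estimate
sens = f_cal[y_cal == 1].mean()               # estimated P(f = 1 | y = 1)
spec = 1 - f_cal[y_cal == 0].mean()           # estimated P(f = 0 | y = 0)
# Rogan-Gladen correction: invert the misclassification model.
rg = (apparent - (1 - spec)) / (sens + spec - 1)
```

Both this estimator and the PPI-style one are unbiased asymptotically; the paper's contribution is characterizing, via the efficient influence function, when each attains lower variance.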
Problem

Research questions and friction points this paper is trying to address.

LLM-as-a-judge
noisy evaluation
measurement error
bias correction
efficient inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-a-judge
prediction-powered inference
efficient influence function
measurement error correction
semiparametric efficiency
Yiqun T Chen
Departments of Biostatistics and Computer Science, Johns Hopkins University, Baltimore, MD 21205, USA
Sizhu Lu
PhD student in Statistics, UC Berkeley
causal inference
Sijia Li
Institute of Information Engineering, Chinese Academy of Sciences
Moran Guo
Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA
Shengyi Li
Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA