🤖 AI Summary
This paper addresses context-dependent bias in preference scores when evaluating large language models (LLMs) across diverse domains. To mitigate this bias, we propose an automatic and efficient debiased preference inference framework. Methodologically, we design a semiparametric estimator that integrates a contextual Bradley–Terry–Luce model with weighted residual balancing aggregated across the comparison graph. We introduce a novel Fisher random walk strategy to derive the optimal weights and represent the nuisance weight functions through potentials, enabling flexible integration with deep learning architectures. Furthermore, we extend the procedure to multiple hypothesis testing via a Gaussian multiplier bootstrap that controls the familywise error rate, and we address distributional shift with a cross-fitted importance-sampling adjustment for target-domain inference. Experiments across multiple domains demonstrate that our approach significantly improves accuracy and robustness in pairwise LLM evaluation, effectively suppresses contextual bias, and exhibits strong generalizability and practical utility.
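The core object of the framework is the contextual Bradley–Terry–Luce model, under which the probability that one model beats another depends on context-indexed score functions. A minimal sketch of that comparison rule, with hypothetical score functions `s_a` and `s_b` standing in for the learned scores (not the paper's actual estimators):

```python
import numpy as np

def btl_win_prob(score_i, score_j):
    """Contextual BTL: P(model i beats model j | context x)
    = sigmoid(s_i(x) - s_j(x)) for context-dependent scores s_k."""
    return 1.0 / (1.0 + np.exp(-(score_i - score_j)))

# Hypothetical context-dependent score functions for two models;
# in the paper these would be flexible (e.g. deep learning) estimates.
def s_a(x):
    return 0.5 * x          # model A gains ground as context x grows

def s_b(x):
    return 0.2 * x + 0.3    # model B starts ahead but grows slower

contexts = np.linspace(0.0, 2.0, 5)
probs = btl_win_prob(s_a(contexts), s_b(contexts))  # win probs over contexts
```

Equal scores give a win probability of exactly 0.5, and the preference flips with the sign of the score gap, which is what makes context-dependent scores change the ranking across domains.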
📝 Abstract
Motivated by the need for rigorous and scalable evaluation of large language models, we study contextual preference inference for pairwise comparison functionals of context-dependent preference score functions across domains. Focusing on the contextual Bradley–Terry–Luce model, we develop a semiparametric efficient estimator that automates debiased estimation by aggregating weighted residual balancing terms across the comparison graph. We show that efficiency is achieved when the weights are derived from a novel strategy called the Fisher random walk. We also propose a computationally feasible method to compute the weights via a potential representation of the nuisance weight functions. We show that our inference procedure is valid for general score function estimators, accommodating practitioners' need to implement flexible deep learning methods. We extend the procedure to multiple hypothesis testing using a Gaussian multiplier bootstrap that controls the familywise error rate, and to distributional shift via a cross-fitted importance-sampling adjustment for target-domain inference. Numerical studies, including language model evaluations under diverse contexts, corroborate the accuracy, efficiency, and practical utility of our method.
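The multiple-testing step can be illustrated with a generic Gaussian multiplier bootstrap: perturb centered per-sample contributions by i.i.d. N(0,1) multipliers and take the 95th percentile of the max statistic as a familywise critical value. This is a minimal sketch of the standard technique under simulated data, not the paper's cross-fitted construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample contributions for m simultaneous pairwise
# comparisons (n samples x m hypotheses); in the paper these would be
# the debiased estimator's residual balancing terms.
n, m = 500, 6
psi = rng.normal(size=(n, m))

se = psi.std(axis=0, ddof=1) / np.sqrt(n)          # per-hypothesis std errors
t_stat = np.abs(psi.mean(axis=0)) / se              # studentized statistics

# Gaussian multiplier bootstrap: rescale centered contributions by
# i.i.d. N(0,1) multipliers and record the max absolute statistic.
B = 2000
centered = psi - psi.mean(axis=0)
boot_max = np.empty(B)
for b in range(B):
    g = rng.normal(size=n)
    boot = (centered * g[:, None]).mean(axis=0) / se
    boot_max[b] = np.abs(boot).max()

crit = np.quantile(boot_max, 0.95)  # familywise 5% critical value
reject = t_stat > crit              # simultaneous rejection decisions
```

Comparing every statistic against the single bootstrapped max-quantile, rather than a per-test cutoff, is what controls the familywise error across all pairwise comparisons at once.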