🤖 AI Summary
Current large language models face limitations in mathematical reasoning evaluation due to small-scale benchmarks and output stochasticity, leading to high variance in accuracy estimates and unstable model rankings. This work proposes the first semiparametric evaluation framework that integrates ground-truth answers with auxiliary chain-of-thought pairwise comparison signals. By constructing a control variate estimator based on the efficient influence function (EIF), the method substantially reduces estimation variance while preserving asymptotic normality. Empirical results on GPQA Diamond, AIME 2025, and GSM8K demonstrate more precise performance estimation and more reliable model rankings, with particularly pronounced advantages in low-sample and high-noise regimes.
📝 Abstract
Evaluating mathematical reasoning in LLMs is constrained by limited benchmark sizes and inherent model stochasticity, yielding high-variance accuracy estimates and unstable rankings across platforms. On difficult problems, an LLM may fail to produce a correct final answer, yet still provide reliable pairwise comparison signals indicating which of two candidate solutions is better. We leverage this observation to design a statistically efficient evaluation framework that combines standard labeled outcomes with pairwise comparison signals obtained by having models judge auxiliary reasoning chains. Treating these comparison signals as control variates, we develop a semiparametric estimator based on the efficient influence function (EIF) for the setting where auxiliary reasoning chains are observed. This yields a one-step estimator that achieves the semiparametric efficiency bound, guarantees strict variance reduction over naive sample averaging, and admits asymptotic normality for principled uncertainty quantification. Across simulations, our one-step estimator substantially improves ranking accuracy, with gains increasing as model output noise grows. Experiments on GPQA Diamond, AIME 2025, and GSM8K further demonstrate more precise performance estimation and more reliable model rankings, especially in small-sample regimes where conventional evaluation is highly unstable.