Evaluating LLMs When They Do Not Know the Answer: Statistical Evaluation of Mathematical Reasoning via Comparative Signals

📅 2026-02-03
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Evaluating mathematical reasoning in current large language models is constrained by small-scale benchmarks and output stochasticity, which produce high-variance accuracy estimates and unstable model rankings. This work proposes the first semi-parametric evaluation framework that integrates ground-truth answers with auxiliary chain-of-thought pairwise comparison signals. By constructing a control-variate estimator based on the efficient influence function (EIF), the method substantially reduces estimation variance while preserving asymptotic normality. Empirical results on GPQA Diamond, AIME 2025, and GSM8K demonstrate more precise performance estimation and more reliable model rankings, with the advantages most pronounced in low-sample and high-noise regimes.
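
To make the control-variate idea concrete, here is a minimal sketch in notation of our own choosing (it is not the paper's estimator, which is the EIF-based one-step construction described in the abstract). Suppose $Y_i \in \{0,1\}$ is the graded correctness of a model's final answer on problem $i$ and $W_i$ is an auxiliary score derived from pairwise comparisons of reasoning chains, with mean $\mu_W$ assumed known or estimable from a larger unlabeled pool. The textbook control-variate estimator of accuracy is

$$\hat{\theta}_{\mathrm{CV}} \;=\; \frac{1}{n}\sum_{i=1}^{n} Y_i \;-\; \hat{\beta}\Big(\frac{1}{n}\sum_{i=1}^{n} W_i - \mu_W\Big), \qquad \hat{\beta} \;=\; \frac{\widehat{\mathrm{Cov}}(Y,W)}{\widehat{\mathrm{Var}}(W)},$$

whose asymptotic variance is $\mathrm{Var}(Y)\,(1-\rho^2)/n$ with $\rho = \mathrm{Corr}(Y,W)$: any nonzero correlation between correctness and the comparison signal gives a strict variance reduction over the naive sample average, which is the effect the EIF-based one-step estimator pushes to the semiparametric efficiency bound.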

📝 Abstract
Evaluating mathematical reasoning in LLMs is constrained by limited benchmark sizes and inherent model stochasticity, yielding high-variance accuracy estimates and unstable rankings across platforms. On difficult problems, an LLM may fail to produce a correct final answer, yet still provide reliable pairwise comparison signals indicating which of two candidate solutions is better. We leverage this observation to design a statistically efficient evaluation framework that combines standard labeled outcomes with pairwise comparison signals obtained by having models judge auxiliary reasoning chains. Treating these comparison signals as control variates, we develop a semiparametric estimator based on the efficient influence function (EIF) for the setting where auxiliary reasoning chains are observed. This yields a one-step estimator that achieves the semiparametric efficiency bound, guarantees strict variance reduction over naive sample averaging, and admits asymptotic normality for principled uncertainty quantification. Across simulations, our one-step estimator substantially improves ranking accuracy, with gains increasing as model output noise grows. Experiments on GPQA Diamond, AIME 2025, and GSM8K further demonstrate more precise performance estimation and more reliable model rankings, especially in small-sample regimes where conventional evaluation is particularly unstable.
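
The variance reduction can be checked numerically. The sketch below is a simulation under an invented data-generating process (all variable names, the difficulty model, and the noise level are assumptions, and it uses the plain control-variate adjustment rather than the paper's EIF-based one-step estimator):

import numpy as np

rng = np.random.default_rng(0)

def control_variate_mean(y, w, mu_w):
    # Adjust the naive mean of y by the deviation of w from its known mean mu_w.
    beta = np.cov(y, w, ddof=1)[0, 1] / np.var(w, ddof=1)
    return y.mean() - beta * (w.mean() - mu_w)

n_problems, n_reps = 100, 2000
naive_estimates, cv_estimates = [], []
for _ in range(n_reps):
    difficulty = rng.uniform(0.0, 1.0, n_problems)                      # latent problem difficulty
    y = (rng.uniform(0.0, 1.0, n_problems) > difficulty).astype(float)  # 1 if the final answer is correct
    w = (1.0 - difficulty) + rng.normal(0.0, 0.2, n_problems)           # noisy comparison-based score
    naive_estimates.append(y.mean())
    cv_estimates.append(control_variate_mean(y, w, mu_w=0.5))           # E[w] = 0.5 under this synthetic setup
print("naive SD:          ", np.std(naive_estimates))
print("control-variate SD:", np.std(cv_estimates))

Because the comparison-based score w is correlated with correctness y, the adjusted estimate concentrates more tightly around the true accuracy across repetitions than the naive average does.
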
Problem

Research questions and friction points this paper is trying to address.

mathematical reasoning
large language models
evaluation
benchmarking
stochasticity
Innovation

Methods, ideas, or system contributions that make the work stand out.

statistical evaluation
pairwise comparison signals
efficient influence function
semiparametric estimation
control variates
👥 Authors
Zihan Dong
Rutgers University
Zhixian Zhang
Rutgers University
Yang Zhou
Ph.D., Rutgers University
Computer Vision, Machine Learning
Can Jin
Rutgers University
Ruijia Wu
Shanghai Jiao Tong University
Linjun Zhang
Associate Professor of Statistics, Rutgers University
High-Dimensional Statistics, Deep Learning, Differential Privacy, Algorithmic Fairness