🤖 AI Summary
Existing LLM reasoning evaluations report only single-run accuracy, overlooking the inherent uncertainty that stems from decoding stochasticity, and thus fail to assess performance stability or the controllability of computational cost.
Method: We propose ReasonBENCH—the first benchmark explicitly quantifying LLM reasoning instability—featuring a modular evaluation framework that integrates multi-round sampling, statistical confidence interval analysis, and standardized chain-of-thought prompting to systematically characterize performance and cost variability across models, tasks, and scales.
Results: Experiments reveal that strategies with identical mean accuracy exhibit confidence interval widths differing by up to 4×; moreover, top-performing methods often incur substantially higher—and more volatile—computational costs. ReasonBENCH uncovers a fundamental trade-off between reasoning stability and efficiency, establishing the first uncertainty-aware evaluation paradigm for LLM reasoning.
📝 Abstract
Large language models (LLMs) are increasingly deployed in settings where reasoning, such as multi-step problem solving and chain-of-thought, is essential. Yet, current evaluation practices overwhelmingly report single-run accuracy while ignoring the intrinsic uncertainty that naturally arises from stochastic decoding. This omission creates a blind spot because practitioners cannot reliably assess whether a method's reported performance is stable, reproducible, or cost-consistent. We introduce ReasonBENCH, the first benchmark designed to quantify the underlying instability in LLM reasoning. ReasonBENCH provides (i) a modular evaluation library that standardizes reasoning frameworks, models, and tasks, (ii) a multi-run protocol that reports statistically reliable metrics for both quality and cost, and (iii) a public leaderboard to encourage variance-aware reporting. Across tasks from different domains, we find that the vast majority of reasoning strategies and models exhibit high instability. Notably, even strategies with similar average performance can display confidence intervals up to four times wider, and the top-performing methods often incur higher and less stable costs. Such instability compromises reproducibility across runs and, consequently, the reliability of reported performance. To better understand these dynamics, we further analyze the impact of prompts, model families, and scale on the trade-off between solve rate and stability. Our results highlight reproducibility as a critical dimension for reliable LLM reasoning and provide a foundation for future reasoning methods and uncertainty quantification techniques. ReasonBENCH is publicly available at https://github.com/au-clan/ReasonBench.
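The idea behind the multi-run protocol, that two strategies with the same mean accuracy can have very different confidence-interval widths, can be illustrated with a minimal, self-contained sketch. The strategy names, accuracy values, and run counts below are hypothetical for illustration, not ReasonBENCH results or its actual API:

```python
import math
import random
import statistics

def mean_ci(samples, z=1.96):
    """Mean and half-width of a normal-approximation 95% confidence interval."""
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(len(samples))
    return m, z * se

random.seed(0)
# Two hypothetical strategies with the same expected accuracy (0.70)
# but different run-to-run variance under stochastic decoding.
stable_runs   = [random.gauss(0.70, 0.01) for _ in range(10)]
unstable_runs = [random.gauss(0.70, 0.04) for _ in range(10)]

m_s, hw_s = mean_ci(stable_runs)
m_u, hw_u = mean_ci(unstable_runs)
print(f"stable:   {m_s:.3f} +/- {hw_s:.3f}")
print(f"unstable: {m_u:.3f} +/- {hw_u:.3f}")
```

Reporting the interval alongside the mean, rather than a single-run number, is what makes the instability visible: both strategies would look identical under single-run evaluation.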