A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility

📅 2025-04-09
🤖 AI Summary
Current mathematical reasoning evaluation suffers from opaque assessment protocols, poor reproducibility, and weak statistical foundations; reported performance gains are frequently confounded by decoding hyperparameters, random seeds, prompt formatting, and hardware/software variations. To address these issues, we propose the first standardized evaluation framework specifically designed for language model reasoning, incorporating multi-dimensional controlled experiments, statistical significance testing, and cross-framework/hardware consistency validation to systematically quantify sources of evaluation volatility. Our empirical analysis reveals substantial overfitting of reinforcement learning (RL) methods on small-scale benchmarks such as AIME24, where the actual gain is only +1.2%; in contrast, supervised fine-tuning (SFT) demonstrates superior generalization and robustness. We fully open-source all code, prompts, and outputs, establishing best practices and reporting standards for reproducible evaluation and thereby advancing mathematical reasoning assessment toward scientific rigor and standardization.
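The framework's emphasis on statistical significance testing over single-run scores can be illustrated with a minimal sketch (not the paper's actual code): evaluate a model under several random seeds and report the mean accuracy with a percentile-bootstrap confidence interval. The per-seed accuracies below are hypothetical placeholders, and `bootstrap_ci` is an illustrative helper, not part of the released framework.

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `scores`."""
    rng = random.Random(seed)
    # Resample the per-seed scores with replacement and collect the means.
    means = sorted(
        statistics.fmean(rng.choices(scores, k=len(scores)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return statistics.fmean(scores), lo, hi

# Hypothetical accuracies from 10 evaluation runs of one model on a small
# benchmark, differing only in the decoding random seed.
scores = [0.433, 0.467, 0.400, 0.500, 0.433, 0.467, 0.433, 0.400, 0.467, 0.433]
mean, lo, hi = bootstrap_ci(scores)
print(f"mean={mean:.3f}, 95% CI=[{lo:.3f}, {hi:.3f}]")
```

On a 30-problem benchmark like AIME24, a single run shifts by over 3 accuracy points per problem flipped, so an interval of this kind makes clear whether a claimed improvement exceeds seed-level noise.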

📝 Abstract
Reasoning has emerged as the next major frontier for language models (LMs), with rapid advances from both academic and industrial labs. However, this progress often outpaces methodological rigor, with many evaluations relying on benchmarking practices that lack transparency, robustness, or statistical grounding. In this work, we conduct a comprehensive empirical study and find that current mathematical reasoning benchmarks are highly sensitive to subtle implementation choices, including decoding parameters, random seeds, prompt formatting, and even hardware and software-framework configurations. Performance gains reported in recent studies frequently hinge on unclear comparisons or unreported sources of variance. To address these issues, we propose a standardized evaluation framework with clearly defined best practices and reporting standards. Using this framework, we reassess recent methods and find that reinforcement learning (RL) approaches yield only modest improvements, far below prior claims, and are prone to overfitting, especially on small-scale benchmarks like AIME24. In contrast, supervised fine-tuning (SFT) methods show consistently stronger generalization. To foster reproducibility, we release all code, prompts, and model outputs for reasoning benchmarks, establishing more rigorous foundations for future work.
Problem

Research questions and friction points this paper is trying to address.

Current LM reasoning benchmarks lack transparency and robustness
Mathematical reasoning benchmarks are sensitive to implementation choices
Reported RL gains may reflect evaluation variance and overfitting rather than genuine improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized evaluation framework for reproducibility
Reassessed RL and SFT methods rigorously
Released all code and data publicly