Curing Miracle Steps in LLM Mathematical Reasoning with Rubric Rewards

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) trained solely with outcome-based rewards for mathematical reasoning are prone to reward hacking, producing spurious "miracle steps" and false positives that reveal overreliance on memorized patterns rather than sound logical derivation. Method: We propose a process-oriented Rubric Reward Model (RRM), which constructs fine-grained, human-validated, problem-specific scoring rubrics to define a calibrated reward function over reasoning paths, and integrates it into a reinforcement learning framework for end-to-end supervision of the entire reasoning process. Contribution/Results: RRM substantially suppresses invalid reasoning, reducing the incidence of Miracle Steps by 71%. It consistently outperforms outcome-reward baselines across four mathematical benchmarks: on AIME2024, Verified Pass@1024 improves from 26.7% to 62.6%. This demonstrates significant gains in both accuracy and reasoning reliability.

📝 Abstract
Large language models for mathematical reasoning are typically trained with outcome-based rewards, which credit only the final answer. In our experiments, we observe that this paradigm is highly susceptible to reward hacking, leading to a substantial overestimation of a model's reasoning ability. This is evidenced by a high incidence of false positives: solutions that reach the correct final answer through an unsound reasoning process. Through a systematic analysis with human verification, we establish a taxonomy of these failure modes, identifying patterns like Miracle Steps: abrupt jumps to a correct output without a valid preceding derivation. Probing experiments suggest a strong association between these Miracle Steps and memorization, where the model appears to recall the answer directly rather than deriving it. To mitigate this systemic issue, we introduce the Rubric Reward Model (RRM), a process-oriented reward function that evaluates the entire reasoning trajectory against problem-specific rubrics. The generative RRM provides fine-grained, calibrated rewards (0-1) that explicitly penalize logical flaws and encourage rigorous deduction. When integrated into a reinforcement learning pipeline, RRM-based training consistently outperforms outcome-only supervision across four math benchmarks. Notably, it boosts Verified Pass@1024 on AIME2024 from 26.7% to 62.6% and reduces the incidence of Miracle Steps by 71%. Our work demonstrates that rewarding the solution process is crucial for building models that are not only more accurate but also more reliable.
Problem

Research questions and friction points this paper is trying to address.

Addresses reward hacking in LLM mathematical reasoning training methods
Mitigates false positives from unsound reasoning processes in models
Introduces process-oriented rewards to improve reliability and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rubric Reward Model evaluates entire reasoning trajectory
Process-oriented reward penalizes logical flaws explicitly
Fine-grained calibrated rewards replace outcome-only supervision
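The rubric-based reward described above can be illustrated with a minimal sketch. This is not the paper's implementation: the rubric items, weights, and the `rubric_reward` helper below are hypothetical, and in the actual method the per-criterion scores would come from a generative judge model rather than being supplied by hand.

```python
# Minimal sketch (NOT the paper's implementation) of a rubric reward:
# each problem carries weighted criteria, and per-criterion satisfaction
# scores are combined into one calibrated reward in [0, 1].
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str   # e.g. "each algebraic step follows from the last"
    weight: float    # relative importance of this criterion

def rubric_reward(satisfaction: list[float], rubric: list[RubricItem]) -> float:
    """Combine per-criterion satisfaction scores (each in [0, 1]) into a
    single weighted-average reward in [0, 1] for a reasoning trajectory."""
    total = sum(item.weight for item in rubric)
    return sum(s * item.weight for s, item in zip(satisfaction, rubric)) / total

# Hypothetical rubric for one math problem.
rubric = [
    RubricItem("states the key lemma with justification", 2.0),
    RubricItem("each algebraic step follows from the last", 2.0),
    RubricItem("final answer matches the derivation", 1.0),
]

# A trajectory that jumps to the correct answer without a valid
# derivation (a "Miracle Step") scores low on the process criteria,
# so it earns a reduced reward despite the correct final answer.
reward = rubric_reward([1.0, 0.0, 1.0], rubric)  # -> 0.6
```

Under this scheme an outcome-only reward would give the miracle-step trajectory full credit, whereas the process-weighted score penalizes the missing derivation, which is the behavior the RL pipeline is meant to reinforce.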
Youliang Yuan
School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
Qiuyang Mang
University of California, Berkeley
Jingbang Chen
University of Waterloo
Hong Wan
Zhejiang University
Xiaoyuan Liu
School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
Junjielong Xu
The Chinese University of Hong Kong, Shenzhen
Jen-tse Huang
Johns Hopkins University
Wenxuan Wang
Renmin University of China
Wenxiang Jiao
Xiaohongshu Inc.
Pinjia He
Assistant Professor, The Chinese University of Hong Kong, Shenzhen