Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning

📅 2025-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the performance limits of reinforcement learning (RL) for mathematical reasoning under binary outcome rewards (correct/incorrect). To address training instability caused by sparse feedback, we propose OREAL: (1) a theoretically grounded framework proving that behavior cloning on optimal positive trajectories converges to the KL-regularized optimal policy; (2) a negative-sample reward shaping mechanism ensuring gradient consistency; and (3) a token-level reward model to mitigate reward sparsity in long chain-of-thought sequences. On the MATH-500 benchmark, our 7B model achieves 94.0% pass@1, matching the performance of a 32B baseline, while the 32B model attains 95.0% pass@1, significantly outperforming state-of-the-art distillation-based methods. The core contributions lie in a theory-driven approach to sparse-reward modeling and an efficient, principled paradigm for policy optimization in math reasoning.
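The three components above can be illustrated with a minimal, self-contained sketch of the training signal: behavior cloning on best-of-N positive trajectories plus a reshaped penalty on negative samples. This is not the authors' implementation; the function name `oreal_loss` and the particular reshaping weight `w_neg` are assumptions for illustration only.

```python
def oreal_loss(logps, is_correct, p_success, beta=1.0):
    """Illustrative OREAL-style objective over a batch of sampled trajectories.

    logps      : sequence log-probabilities log pi(y|x) under the current policy
    is_correct : binary outcome rewards (1 = correct, 0 = incorrect)
    p_success  : estimated probability that a sampled trajectory is correct,
                 used here to reshape the negative-sample weight (assumed form)
    """
    pos = [lp for lp, c in zip(logps, is_correct) if c]
    neg = [lp for lp, c in zip(logps, is_correct) if not c]
    # Behavior cloning on positive (best-of-N) trajectories: maximize their log-prob.
    bc_loss = -sum(pos) / max(len(pos), 1)
    # Reshaped negative term: penalize high log-prob on incorrect trajectories,
    # scaled so positive and negative gradient magnitudes stay comparable as the
    # success rate changes (an illustrative choice, not the paper's exact form).
    w_neg = p_success / max(1.0 - p_success, 1e-6)
    neg_loss = w_neg * sum(neg) / max(len(neg), 1)
    return bc_loss + beta * neg_loss
```

In a real setup `logps` would come from the policy model and `p_success` from the verifier's empirical accuracy on the current prompt batch.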

📝 Abstract
Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as the o-series models of OpenAI, have made remarkable progress on reasoning tasks. However, the complete technical details remain unrevealed, and the techniques believed to be adopted are only reinforcement learning (RL) and long chains of thought. This paper proposes a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through Outcome REwArd-based reinforcement Learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible. We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments. This formulation further implies that the rewards of negative samples should be reshaped to ensure gradient consistency between positive and negative samples. To alleviate the long-standing difficulties brought by sparse rewards in RL, which are further exacerbated by the partial correctness of long chains of thought in reasoning tasks, we also apply a token-level reward model to sample important tokens in reasoning trajectories for learning. With OREAL, for the first time, a 7B model can obtain 94.0 pass@1 accuracy on MATH-500 through RL, being on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation with 95.0 pass@1 accuracy on MATH-500. Our investigation also indicates the importance of initial policy models and training queries for RL. Code, models, and data will be released to benefit future research (https://github.com/InternLM/OREAL).
Problem

Research questions and friction points this paper is trying to address.

Develops OREAL framework for mathematical reasoning tasks.
Addresses sparse rewards in reinforcement learning for math.
Enhances model accuracy on complex math problems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

OREAL framework enhances RL for math reasoning.
Token-level reward model optimizes sparse rewards.
Best-of-N sampling improves policy learning efficiency.
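The best-of-N idea in the bullets above can be sketched as follows: sample N candidate solutions, keep those the binary outcome verifier marks correct, and use the surviving positives for behavior cloning. `sample_fn` and `verify_fn` are hypothetical stand-ins for the policy's sampler and the outcome verifier.

```python
import random

def best_of_n(sample_fn, verify_fn, prompt, n=16, seed=0):
    """Collect best-of-N positives: generate n candidate solutions and
    keep only those the binary outcome verifier marks correct."""
    rng = random.Random(seed)
    candidates = [sample_fn(prompt, rng) for _ in range(n)]
    return [c for c in candidates if verify_fn(prompt, c)]

# Toy usage: the "solver" guesses an integer; the verifier checks it equals 7.
positives = best_of_n(
    sample_fn=lambda p, rng: rng.randint(0, 9),
    verify_fn=lambda p, c: c == 7,
    prompt="find x with x == 7",
    n=32,
)
```

In practice `sample_fn` would be a temperature-sampled LLM generation and `verify_fn` an exact-answer check against the reference solution.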
👥 Authors
Chengqi Lyu (Shanghai AI Laboratory)
Songyang Gao (Shanghai AI Laboratory)
Yuzhe Gu (Shanghai Jiao Tong University) · Large Language Model, Scalable Oversight, Knowledge and Reasoning
Wenwei Zhang (Shanghai AI Laboratory) · Large Language Model, Scalable Oversight, Artificial Intelligence
Jianfei Gao (Shanghai AI Laboratory)
Kuikun Liu (Shanghai AI Laboratory)
Ziyi Wang (Shanghai AI Laboratory)
Shuaibin Li (Shanghai AI Laboratory)
Qian Zhao (Shanghai AI Laboratory)
Haian Huang (Shanghai AI Laboratory)
Weihan Cao (Shanghai AI Laboratory)
Jiangning Liu (Shanghai AI Laboratory)
Hongwei Liu (Shanghai AI Laboratory)
Junnan Liu (Shanghai AI Laboratory)
Songyang Zhang (Shanghai AI Laboratory)
Dahua Lin (The Chinese University of Hong Kong) · computer vision, machine learning, probabilistic inference, Bayesian nonparametrics
Kai Chen (Shanghai AI Laboratory)