AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reinforcement learning (RL) approaches for enhancing the reasoning capabilities of small- and mid-sized language models suffer from unclear training protocols, poor data quality, and ad hoc curriculum design. Method: This paper proposes a two-stage, verifiable RL framework ("math-first, then code-only") that combines test-case-driven reward modeling, a progressive response-length curriculum, on-policy parameter updates, and high-confidence data filtering into a reproducible, high-quality training pipeline. Results: On AIME 2025, the method improves the 7B and 14B models by 14.6% and 17.2%, respectively; on LiveCodeBench, the gains are +6.8% and +5.8%. It significantly outperforms state-of-the-art distillation baselines and, for the first time, empirically validates cross-domain generalization: math-only RL training also enhances code reasoning, eliciting deep-reasoning ability left latent by pretraining and supervised fine-tuning.
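The "test-case-driven reward modeling" described above can be sketched as a binary verifiable reward: the model's generated code earns reward 1 only if it passes every test case. This is a minimal illustration, not the paper's actual implementation; the function name `binary_code_reward` and the in-process `exec` (a real pipeline would sandbox and time-limit execution) are assumptions.

```python
# Minimal sketch of a verification-based reward for code RL.
# Assumption: candidate code defines a function `fn_name` that is
# checked against (args, expected) test cases. Binary reward: all-or-nothing.

def binary_code_reward(candidate_src: str, fn_name: str, test_cases) -> float:
    """Return 1.0 iff the candidate passes every test case, else 0.0."""
    namespace = {}
    try:
        exec(candidate_src, namespace)       # compile and load the model's code
        fn = namespace[fn_name]
        for args, expected in test_cases:
            if fn(*args) != expected:        # any failing case -> zero reward
                return 0.0
        return 1.0
    except Exception:                        # syntax/runtime errors -> zero reward
        return 0.0
```

Because the reward is verifiable rather than learned, it cannot be gamed by reward-model exploitation, which is one reason test-case and exact-answer checking are attractive for math and code RL.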

📝 Abstract
Despite recent progress in large-scale reinforcement learning (RL) for reasoning, the training recipe for building high-performing reasoning models remains elusive. Key implementation details of frontier models, such as DeepSeek-R1, including data curation strategies and RL training recipe, are often omitted. Moreover, recent research indicates distillation remains more effective than RL for smaller models. In this work, we demonstrate that large-scale RL can significantly enhance the reasoning capabilities of strong, small- and mid-sized models, achieving results that surpass those of state-of-the-art distillation-based models. We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first training on math-only prompts, then on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks (e.g., +14.6% / +17.2% on AIME 2025 for the 7B / 14B models), but also code reasoning tasks (e.g., +6.8% / +5.8% on LiveCodeBench for the 7B / 14B models). In addition, extended code-only RL iterations further improve performance on code benchmarks with minimal or no degradation in math results. We develop a robust data curation pipeline to collect challenging prompts with high-quality, verifiable answers and test cases to enable verification-based RL across both domains. Finally, we identify key experimental insights, including curriculum learning with progressively increasing response lengths and the stabilizing effect of on-policy parameter updates. We find that RL not only elicits the foundational reasoning capabilities acquired during pretraining and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
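The abstract's "curriculum learning with progressively increasing response lengths" amounts to raising the cap on generated tokens as RL training progresses, so the model first learns to reason concisely before being allowed very long chains of thought. The sketch below illustrates the idea only; the stage boundaries and token caps are illustrative assumptions, not the paper's actual schedule.

```python
# Hedged sketch of a response-length curriculum for RL training.
# Assumption: (start_step, token_cap) stages are illustrative values,
# not the schedule used in the paper.

LENGTH_STAGES = [(0, 8_000), (1_000, 16_000), (2_000, 24_000), (3_000, 32_000)]

def max_response_tokens(step: int) -> int:
    """Return the generation-length cap in effect at a given RL step."""
    cap = LENGTH_STAGES[0][1]
    for start_step, stage_cap in LENGTH_STAGES:
        if step >= start_step:
            cap = stage_cap                  # later stages allow longer responses
    return cap
```

At each rollout step, the sampler's maximum generation length would be set to `max_response_tokens(step)`, so early training truncates overlong responses while later stages permit the long reasoning traces needed for hard problems.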
Problem

Research questions and friction points this paper is trying to address.

Improving math and code reasoning in small to mid-sized models using RL
Developing effective RL training strategies for reasoning tasks
Enhancing model performance beyond distillation-based approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale RL enhances reasoning in small- and mid-sized models
Sequential RL training on math-only, then code-only prompts
Robust data curation with verifiable answers and test cases