AceReason-Nemotron 1.1: Advancing Math and Code Reasoning through SFT and RL Synergy

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how supervised fine-tuning (SFT) and reinforcement learning (RL) work together to build strong math and code reasoning models. Methodologically, the authors scale SFT data along two axes, increasing both the number of collected prompts and the number of generated responses per prompt, and they choose the RL sampling temperature so that the temperature-adjusted entropy stays around 0.3, a setting that balances exploration and exploitation. They further provide systematic empirical evidence that a stronger SFT initialization leads to better final RL performance, while the gap between different SFT initializations narrows substantially over the course of RL training. Experiments show that the resulting AceReason-Nemotron-1.1 7B model, built on Qwen2.5-7B, achieves state-of-the-art results among Qwen2.5-7B-based reasoning models on challenging math and programming benchmarks. The model and data are publicly released.

📝 Abstract
In this work, we investigate the synergy between supervised fine-tuning (SFT) and reinforcement learning (RL) in developing strong reasoning models. We begin by curating the SFT training data through two scaling strategies: increasing the number of collected prompts and the number of generated responses per prompt. Both approaches yield notable improvements in reasoning performance, with scaling the number of prompts resulting in more substantial gains. We then explore the following questions regarding the synergy between SFT and RL: (i) Does a stronger SFT model consistently lead to better final performance after large-scale RL training? (ii) How can we determine an appropriate sampling temperature during RL training to effectively balance exploration and exploitation for a given SFT initialization? Our findings suggest that (i) holds true, provided effective RL training is conducted, particularly when the sampling temperature is carefully chosen to maintain the temperature-adjusted entropy around 0.3, a setting that strikes a good balance between exploration and exploitation. Notably, the performance gap between initial SFT models narrows significantly throughout the RL process. Leveraging a strong SFT foundation and insights into the synergistic interplay between SFT and RL, our AceReason-Nemotron-1.1 7B model significantly outperforms AceReason-Nemotron-1.0 and achieves new state-of-the-art performance among Qwen2.5-7B-based reasoning models on challenging math and code benchmarks, thereby demonstrating the effectiveness of our post-training recipe. We release the model and data at: https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
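The abstract's key tuning knob is the sampling temperature, chosen so that the temperature-adjusted entropy of the policy's token distribution sits near 0.3. A minimal sketch of that idea is below; the helper names, the grid-search approach, and the temperature range are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def temperature_adjusted_entropy(logits, temperature):
    """Entropy of the softmax distribution after dividing logits by a temperature.

    `logits` is a 1-D array of raw model logits for one sampling step.
    """
    scaled = logits / temperature
    scaled = scaled - scaled.max()        # subtract max for numerical stability
    probs = np.exp(scaled)
    probs = probs / probs.sum()
    return float(-(probs * np.log(probs + 1e-12)).sum())

def pick_temperature(logits, target_entropy=0.3,
                     grid=np.linspace(0.5, 1.5, 101)):
    """Scan a temperature grid and return the value whose resulting entropy
    is closest to the target (0.3, following the setting reported in the paper)."""
    entropies = [temperature_adjusted_entropy(logits, t) for t in grid]
    best = int(np.argmin([abs(h - target_entropy) for h in entropies]))
    return float(grid[best])
```

Raising the temperature flattens the distribution and increases entropy (more exploration); lowering it sharpens the distribution (more exploitation), which is why a single entropy target can pin down an appropriate temperature for a given SFT initialization.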
Problem

Research questions and friction points this paper is trying to address.

Enhancing math and code reasoning via SFT and RL synergy
Optimizing SFT data scaling for improved reasoning performance
Choosing an RL sampling temperature that balances exploration and exploitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synergy of SFT and RL for reasoning models
Scaling prompts and responses boosts performance
Entropy-guided sampling temperature balances exploration and exploitation