🤖 AI Summary
Problem: Large language models (LLMs) exhibit insufficient tool-use capability and autonomous reflection during mathematical reasoning. Method: We propose an agentic reinforcement learning (Agentic RL) training framework for a 14B-parameter model, featuring GRPO-RoC—Group Relative Policy Optimization with a Resample-on-Correct rollout strategy—alongside a staged RL training paradigm and a lightweight, high-throughput RL infrastructure enabling stable, low-cost rollouts in noisy Python execution environments. Contribution/Results: Our approach endows the model with a closed-loop cognitive workflow: “think → code → execute → reflect on feedback → verify and correct intermediate steps.” Trained for only 510 steps (under one week) on 64 GPUs, the model achieves 80.6% and 69.8% pass@1 on AIME24 and AIME25, respectively—surpassing the 671B-parameter DeepSeek-R1—while generating shorter responses and demonstrating strong generalization to scientific reasoning and alignment tasks.
📝 Abstract
We introduce rStar2-Agent, a 14B math reasoning model trained with agentic reinforcement learning to achieve frontier-level performance. Beyond current long CoT, the model demonstrates advanced cognitive behaviors, such as thinking carefully before using Python coding tools and reflecting on code execution feedback to autonomously explore, verify, and refine intermediate steps in complex problem-solving. This capability is enabled through three key innovations that make agentic RL effective at scale: (i) an efficient RL infrastructure with a reliable Python code environment that supports high-throughput execution and mitigates high rollout costs, enabling training on limited GPU resources (64 MI300X GPUs); (ii) GRPO-RoC, an agentic RL algorithm with a Resample-on-Correct rollout strategy that addresses the inherent environment noise from coding tools, allowing the model to reason more effectively in a code environment; (iii) an efficient agent training recipe that starts with non-reasoning SFT and progresses through multiple RL stages, yielding advanced cognitive abilities at minimal compute cost. As a result, rStar2-Agent boosts a pre-trained 14B model to the state of the art in only 510 RL steps within one week, achieving average pass@1 scores of 80.6% on AIME24 and 69.8% on AIME25, surpassing DeepSeek-R1 (671B) with significantly shorter responses. Beyond mathematics, rStar2-Agent-14B also demonstrates strong generalization to alignment, scientific reasoning, and agentic tool-use tasks. Code and training recipes are available at https://github.com/microsoft/rStar.
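The Resample-on-Correct idea described in (ii) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact procedure: the `Rollout` fields, the half-positive group split, and the preference for correct trajectories with fewer tool-call errors are our assumptions about how such a filter might look.

```python
import random
from dataclasses import dataclass

@dataclass
class Rollout:
    answer_correct: bool   # did the trajectory reach the right final answer?
    tool_errors: int       # number of failed Python tool calls along the way

def resample_on_correct(pool, group_size, rng=random):
    """From an oversampled rollout pool, build a training group that
    keeps incorrect rollouts as negatives but admits only the cleanest
    correct trajectories (fewest tool-call errors), reducing the reward
    noise that tool failures inject into otherwise-correct solutions."""
    correct = sorted((r for r in pool if r.answer_correct),
                     key=lambda r: r.tool_errors)
    incorrect = [r for r in pool if not r.answer_correct]
    # Keep up to half the group as positives, cleanest first (assumed split).
    kept = correct[:min(len(correct), group_size // 2)]
    need = group_size - len(kept)
    kept += rng.sample(incorrect, min(need, len(incorrect)))
    return kept
```

A usage pattern would be to oversample, say, twice the group size of rollouts per prompt, apply this filter, then compute GRPO advantages over the filtered group as usual.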