🤖 AI Summary
Large language models suffer from computational inefficiency and insufficient accuracy in complex mathematical reasoning. To address this, we propose AgentMath, a novel framework that introduces the first automatic method for converting natural-language chain-of-thought reasoning into structured tool-execution trajectories. AgentMath establishes a dynamic interleaving paradigm of language generation and code execution within an agent-based reinforcement learning (RL) framework, supported by efficient training mechanisms including request-level asynchronous rollout, partial rollout, and prefix-aware load balancing. Built on a tool-calling agent architecture, AgentMath combines supervised fine-tuning (SFT) with agentic RL. Empirical evaluation demonstrates state-of-the-art performance on major mathematical competition benchmarks: AgentMath-30B-A3B achieves 90.6%, 86.4%, and 73.8% accuracy on AIME24, AIME25, and HMMT25, respectively.
📝 Abstract
Large Reasoning Models (LRMs) such as o3 and DeepSeek-R1 have achieved remarkable progress in natural language reasoning with long chain-of-thought. However, they remain computationally inefficient and struggle with accuracy when solving problems that require complex mathematical operations. In this work, we present AgentMath, an agent framework that seamlessly integrates the reasoning capabilities of language models with the computational precision of code interpreters to efficiently tackle complex mathematical problems. Our approach introduces three key innovations: (1) an automated method that converts natural-language chain-of-thought into structured tool-augmented trajectories, generating high-quality supervised fine-tuning (SFT) data to alleviate data scarcity; (2) a novel agentic reinforcement learning (RL) paradigm that dynamically interleaves natural language generation with real-time code execution, enabling models to autonomously learn optimal tool-use strategies through multi-round interactive feedback while fostering emergent capabilities in code refinement and error correction; (3) an efficient training system incorporating request-level asynchronous rollout scheduling, agentic partial rollout, and prefix-aware weighted load balancing, achieving a 4-5x speedup and making RL training feasible on ultra-long sequences in scenarios with massive tool calls. Extensive evaluations show that AgentMath achieves state-of-the-art performance on challenging mathematical competition benchmarks, including AIME24, AIME25, and HMMT25, where AgentMath-30B-A3B attains 90.6%, 86.4%, and 73.8% accuracy, respectively. These results validate the effectiveness of our approach and pave the way for building more sophisticated and scalable mathematical reasoning agents.
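The interleaving of language generation and code execution described in innovation (2) can be pictured as a simple agent loop: the model emits a reasoning segment, any fenced code block in it is executed by an interpreter, and the interpreter's output (including errors, which enable self-correction) is appended to the context before the next generation round. The sketch below is a hypothetical minimal illustration, not the paper's actual implementation; `solve`, `run_code`, the fenced-code tool-call convention, and the `[interpreter output]` delimiter are all assumptions for exposition (sandboxing and token budgets are omitted).

```python
import re
import io
import contextlib

# Convention assumed here: the model marks tool calls as fenced Python blocks.
CODE_RE = re.compile(r"```python\n(.*?)```", re.DOTALL)

def run_code(snippet: str) -> str:
    """Execute a Python snippet and capture its stdout (no sandboxing here)."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(snippet, {})
    except Exception as e:
        # Errors are returned as text so the model can refine its code next round.
        return f"Error: {e}"
    return buf.getvalue().strip()

def solve(problem: str, generate, max_rounds: int = 8) -> str:
    """Interleave language generation with real-time code execution.

    `generate` stands in for the policy model: it maps the running context
    to the next reasoning segment, which may contain a fenced code block.
    """
    context = problem
    for _ in range(max_rounds):
        segment = generate(context)
        context += segment
        match = CODE_RE.search(segment)
        if match is None:
            # No tool call in this segment: treat it as the final answer.
            return context
        result = run_code(match.group(1))
        context += f"\n[interpreter output]\n{result}\n"
    return context
```

In an RL setting, each pass through this loop is one step of a multi-round trajectory, and the final context is what gets scored; the error-feedback path is what lets code-refinement behavior emerge during training.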