AgentMath: Empowering Mathematical Reasoning for Large Language Models via Tool-Augmented Agent

πŸ“… 2025-12-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Large language models suffer from computational inefficiency and insufficient accuracy in complex mathematical reasoning. To address this, we propose AgentMathβ€”a novel framework that introduces the first automatic conversion method from natural-language chain-of-thought reasoning to structured tool-execution trajectories. It establishes a dynamic interleaving paradigm of language generation and code execution within an agent-based reinforcement learning (RL) framework. We further design efficient training mechanisms, including request-level asynchronous rollout, partial rollout, and prefix-aware load balancing. Built upon a tool-calling agent architecture, AgentMath integrates supervised fine-tuning (SFT) with agentic RL. Empirical evaluation demonstrates state-of-the-art performance on major mathematical competition benchmarks: AgentMath-30B-A3B achieves 90.6%, 86.4%, and 73.8% accuracy on AIME24, AIME25, and HMMT25, respectively.

πŸ“ Abstract
Large Reasoning Models (LRMs) like o3 and DeepSeek-R1 have achieved remarkable progress in natural language reasoning with long chain-of-thought. However, they remain computationally inefficient and struggle with accuracy when solving problems requiring complex mathematical operations. In this work, we present AgentMath, an agent framework that seamlessly integrates language models' reasoning capabilities with code interpreters' computational precision to efficiently tackle complex mathematical problems. Our approach introduces three key innovations: (1) An automated method that converts natural language chain-of-thought into structured tool-augmented trajectories, generating high-quality supervised fine-tuning (SFT) data to alleviate data scarcity; (2) A novel agentic reinforcement learning (RL) paradigm that dynamically interleaves natural language generation with real-time code execution. This enables models to autonomously learn optimal tool-use strategies through multi-round interactive feedback, while fostering emergent capabilities in code refinement and error correction; (3) An efficient training system incorporating innovative techniques, including request-level asynchronous rollout scheduling, agentic partial rollout, and prefix-aware weighted load balancing, achieving a 4-5x speedup and making efficient RL training feasible on ultra-long sequences in scenarios with massive tool calls. Extensive evaluations show that AgentMath achieves state-of-the-art performance on challenging mathematical competition benchmarks including AIME24, AIME25, and HMMT25; specifically, AgentMath-30B-A3B attains 90.6%, 86.4%, and 73.8% accuracy, respectively. These results validate the effectiveness of our approach and pave the way for building more sophisticated and scalable mathematical reasoning agents.
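The interleaving paradigm the abstract describes can be illustrated in miniature: the model emits reasoning text, any embedded code is executed by an interpreter, and the captured output is fed back into the context before generation resumes. The sketch below is a hedged toy loop, not the paper's implementation; the `<code>`/`<output>` delimiters, the `solve`/`toy_model` names, and the `FINAL:` stop token are illustrative assumptions.

```python
import contextlib
import io
import re

# Assumed delimiter convention: the model wraps tool calls in <code>...</code>.
CODE_RE = re.compile(r"<code>(.*?)</code>", re.DOTALL)

def run_code(snippet: str) -> str:
    """Execute a Python snippet and capture its printed output (trusted sandbox assumed)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(snippet, {})
    return buf.getvalue().strip()

def solve(model_step, problem: str, max_rounds: int = 8) -> str:
    """Interleave language generation with code execution until a final answer appears."""
    transcript = problem
    for _ in range(max_rounds):
        chunk = model_step(transcript)  # model emits reasoning, maybe a <code> block
        transcript += chunk
        match = CODE_RE.search(chunk)
        if match:
            result = run_code(match.group(1))
            transcript += f"\n<output>{result}</output>\n"  # feed tool result back
        if "FINAL:" in chunk:
            return chunk.split("FINAL:")[1].strip()
    return ""

# Toy stand-in "model": first asks the interpreter to compute,
# then reads the fed-back <output> and commits to an answer.
def toy_model(transcript: str) -> str:
    if "<output>" not in transcript:
        return "Compute 17 * 23. <code>print(17 * 23)</code>"
    value = transcript.split("<output>")[1].split("</output>")[0]
    return f"FINAL: {value}"

print(solve(toy_model, "What is 17 * 23?"))  # → 391
```

In the paper's RL setting, the reward on the final answer is what teaches the policy when to hand arithmetic to the interpreter and how to react to execution errors; here the model is a fixed stub purely to show the control flow.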
Problem

Research questions and friction points this paper is trying to address.

Enhances mathematical reasoning in large language models via tool integration
Addresses computational inefficiency and insufficient accuracy in complex mathematical operations
Generates high-quality SFT training data and learns tool-use strategies through reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates language models with code interpreters for precision
Uses agentic reinforcement learning for dynamic tool-use strategies
Implements efficient training system with asynchronous rollout scheduling
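The third innovation, request-level asynchronous rollout, can be sketched as follows: each request (episode) runs independently and the trainer consumes trajectories as they complete, rather than blocking on the slowest member of a synchronized batch. This is a hedged miniature under stated assumptions, not the paper's training system; `rollout`, `async_rollout`, and the thread-pool scheduling are illustrative stand-ins.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def rollout(request_id: int) -> dict:
    """Simulate one agentic episode; tool-heavy requests take variable time."""
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for generation + tool calls
    return {"id": request_id, "trajectory": f"traj-{request_id}"}

def async_rollout(num_requests: int, workers: int = 4) -> list:
    """Schedule rollouts per request and harvest each trajectory the moment
    it finishes, so a long episode never stalls the rest of the batch."""
    done = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(rollout, i) for i in range(num_requests)]
        for fut in as_completed(futures):  # request-level, not batch-level
            done.append(fut.result())       # a trainer could update here
    return done

trajs = async_rollout(8)
print(len(trajs))  # → 8
```

The speedup the abstract reports comes from exactly this property: with highly variable per-request lengths (ultra-long sequences, many tool calls), per-request completion keeps workers busy instead of idling at a batch barrier.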
Haipeng Luo
Shenzhen International Graduate School, Tsinghua University
Huawen Feng
South China University of Technology, Alibaba Tongyi Lab, Microsoft Research Asia, Tencent Hunyuan X
NLP · Large Language Models · Post Training · Reinforcement Learning · Preference Optimization
Qingfeng Sun
Tencent Hunyuan X
Natural Language Processing
Can Xu
Tencent Hunyuan
Kai Zheng
Tencent Hunyuan
Yufei Wang
Tencent Hunyuan
Tao Yang
Tencent Hunyuan
Han Hu
Tencent Hunyuan
Yansong Tang
Shenzhen International Graduate School, Tsinghua University
Di Wang
Tencent Hunyuan