ToolMATH: A Math Tool Benchmark for Realistic Long-Horizon Multi-Tool Reasoning

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the reliability of tool-augmented language models in long-horizon reasoning within complex, multi-tool environments by introducing the first mathematical benchmark tailored to this setting, comprising 8,000 problems and 12,000 tools. Through structured tool invocation, multi-step execution tracing, and a carefully designed Hard subset, the study systematically evaluates model performance. It reveals that tool redundancy amplifies early reasoning errors, leading to execution drift, and that while distractor tools can partially compensate for missing capabilities, they often induce erroneous trajectories. The primary failure mode stems from insufficient reasoning capacity, causing error accumulation; moreover, maintaining long-horizon planning consistency and observational discipline proves more critical to performance than optimizing individual action selection.

📝 Abstract
We introduce \ToolMATH, a math-grounded benchmark that evaluates tool-augmented language models in realistic multi-tool environments, where producing the correct output depends on calling schema-specified tools and sustaining multi-step execution. It turns math problems into a controlled, correctness-checkable benchmark with accompanying tool sets, enabling systematic evaluation of model reliability under (1) large, overlapping tool catalogs and (2) the absence of the intended capability. \ToolMATH provides actionable diagnostic evidence of failure modes in tool-augmented agents, helping identify the control mechanisms required for robustness. \ToolMATH contains roughly 8k questions and 12k tools; we additionally provide a hard subset, \ToolMATHHard, with its own questions and tools. Our evaluation reveals that the key failure factor is the inability to reason, which leads to the accumulation of errors in intermediate results and constrains later decisions. Tool-list redundancy does not simply add noise; it amplifies small early deviations into irreversible execution drift. The benchmark highlights that when the intended capability is missing, distractor tools can sometimes serve as partial substitutes in solution paths, yet they can also mislead models into ungrounded tool trajectories. Finally, comparisons between tool-use protocols show that improvements come less from local action selection and more from long-range plan coherence and disciplined use of observations.
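
The abstract's setup of schema-specified tool calls and multi-step execution can be made concrete with a minimal sketch. The dataclasses, field names, and the toy add/sum_list catalog below are illustrative assumptions, not the paper's actual tool schema or evaluation harness.

```python
# Minimal sketch of a schema-specified tool catalog and a multi-step
# execution trace, in the spirit of the benchmark's setup. All names,
# fields, and the checking logic here are assumptions for illustration.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Tool:
    name: str
    description: str
    params: dict[str, str]        # parameter name -> declared type, e.g. {"a": "number"}
    fn: Callable[..., Any]        # executable backing the schema


@dataclass
class Step:
    tool: str                     # which tool the model invoked
    args: dict[str, Any]          # arguments it supplied
    observation: Any = None       # result returned to the model


def run_trace(steps: list[Step], catalog: dict[str, Tool]) -> list[Step]:
    """Execute a model-proposed tool trajectory and record each observation."""
    for step in steps:
        tool = catalog[step.tool]            # an unknown tool name would fail here
        step.observation = tool.fn(**step.args)
    return steps


# Toy catalog with one intended tool and one overlapping "distractor".
catalog = {
    "add": Tool("add", "Add two numbers",
                {"a": "number", "b": "number"}, lambda a, b: a + b),
    "sum_list": Tool("sum_list", "Sum a list of numbers",
                     {"xs": "list[number]"}, lambda xs: sum(xs)),
}

# Two-step trajectory for "compute (2 + 3) + 4"; correctness is judged
# only against the final observation, mirroring answer-checkable evaluation.
trace = run_trace([Step("add", {"a": 2, "b": 3}),
                   Step("add", {"a": 5, "b": 4})], catalog)
assert trace[-1].observation == 9
```

In this toy setting, an early mistake (e.g. calling sum_list with the wrong arguments or mis-copying an intermediate observation) propagates to every later step, which is the kind of execution drift the benchmark is designed to expose.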
Problem

Research questions and friction points this paper is trying to address.

tool-augmented reasoning
multi-tool environments
long-horizon reasoning
mathematical problem solving
execution drift
Innovation

Methods, ideas, or system contributions that make the work stand out.

tool-augmented reasoning
long-horizon planning
multi-tool benchmark
execution drift
mathematical problem solving
Hyeonje Choi
Seoul National University
Jeongsoo Lee
Seoul National University
Hyojun Lee
Seoul National University
Jay-Yoon Lee
Seoul National University
Machine Learning
Artificial Intelligence
Knowledge Injection
Structured prediction