🤖 AI Summary
Existing agent benchmarks conflate agentic reasoning (the interplay between tool invocation and logical inference) with extraneous capabilities such as advanced mathematics or expert-level factual knowledge, which prevents evaluating core agentic reasoning in isolation.
Method: We propose GSM-Agent, a controlled benchmark that targets *active tool invocation under information scarcity* combined with *grade-school-level mathematical reasoning*: the agent is shown only the question, while the premises needed to solve it must be gathered proactively via tools. To formalize reasoning structure, we introduce the agentic reasoning graph: the environment's document embeddings are clustered into nodes, and each tool call is mapped to its nearest node, yielding a traceable reasoning path. Analysis of these graphs reveals a widespread deficiency: many models rarely revisit previously visited nodes. Building on this insight, we design a tool-augmented test-time scaling method that adds tools to encourage revisiting, integrating retrieval and inference.
Results: Even GPT-5 achieves only 67% accuracy on GSM-Agent, confirming the benchmark's difficulty; our method significantly improves agentic reasoning performance across mainstream models.
📝 Abstract
As LLMs are increasingly deployed as agents, agentic reasoning, the ability to combine tool use (especially search) with reasoning, becomes a critical skill. However, agentic reasoning is hard to disentangle when it is evaluated in complex environments and tasks: current agent benchmarks often mix it with challenging math reasoning, expert-level knowledge, and other advanced capabilities. To fill this gap, we build a novel benchmark, GSM-Agent, where an LLM agent is required to solve grade-school-level reasoning problems, but the prompt contains only the question, not the premises that hold the information needed to solve it; the agent must proactively collect that information using tools. Although the original tasks are grade-school math problems, we observe that even frontier models like GPT-5 achieve only 67% accuracy. To understand and analyze agentic reasoning patterns, we propose the concept of the agentic reasoning graph: the environment's document embeddings are clustered into nodes, and each tool call is mapped to its nearest node to build a reasoning path. Surprisingly, we find that the ability to revisit a previously visited node, widely taken as a crucial pattern in static reasoning, is often missing from many models' agentic reasoning. Based on this insight, we propose a tool-augmented test-time scaling method that improves LLMs' agentic reasoning performance by adding tools that encourage models to revisit. We expect our benchmark and agentic reasoning framework to aid future studies that seek to understand and push the boundaries of agentic reasoning.
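To make the agentic reasoning graph idea concrete, here is a minimal sketch of how such a graph-based trace could be built and inspected for revisits. It assumes the environment's documents and the agent's tool-call queries have already been embedded as fixed-size vectors; the use of k-means, the node count, and the function names are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: cluster document embeddings into graph nodes,
# map each tool call to its nearest node, and count revisits.
import numpy as np
from sklearn.cluster import KMeans

def build_reasoning_path(doc_embeddings: np.ndarray,
                         tool_call_embeddings: np.ndarray,
                         n_nodes: int = 8) -> list[int]:
    """Cluster environment documents into nodes, then map each tool call
    (in order) to its nearest node, giving a reasoning path over the graph."""
    kmeans = KMeans(n_clusters=n_nodes, n_init=10, random_state=0)
    kmeans.fit(doc_embeddings)
    centroids = kmeans.cluster_centers_  # one centroid per graph node
    # Nearest-centroid assignment for every tool-call embedding.
    dists = np.linalg.norm(
        tool_call_embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1).tolist()

def count_revisits(path: list[int]) -> int:
    """Count transitions that return to a node the agent has already left."""
    revisits, seen = 0, set()
    for prev, cur in zip(path, path[1:]):
        seen.add(prev)
        if cur != prev and cur in seen:
            revisits += 1
    return revisits
```

Under this reading, a low `count_revisits` value over an agent's trajectory would flag the missing revisit pattern the paper highlights, and the proposed test-time scaling method would add tools designed to push that count up.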