Beyond the Final Answer: Evaluating the Reasoning Trajectories of Tool-Augmented Agents

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tool-augmented agent benchmarks rely heavily on final-answer matching and fail to characterize critical trajectory-level properties, such as reasoning efficiency, hallucination, and adaptability, throughout multi-step inference. To address this, we propose TRACE, a lightweight, evidence-based framework for multi-dimensional automatic assessment of reasoning paths that requires no exhaustive human annotation, substantially reducing evaluation cost. Its core innovations are: (i) an LLM-driven evidence accumulation mechanism that distills stepwise justifications; (ii) an interpretable, multi-dimensional scoring system covering correctness, efficiency, faithfulness, and robustness; and (iii) a meta-evaluation dataset construction methodology that remains reliable even with small language models. Extensive experiments across multiple benchmarks demonstrate TRACE's validity and scalability, and it systematically uncovers previously unreported behavioral biases and latent deficiencies of state-of-the-art tool-augmented agents on realistic, complex tasks.

📝 Abstract
Although recent tool-augmented benchmarks incorporate complex user requests and diverse tools, the evaluation methods for most of them remain limited to answer matching. However, as the number of steps required to resolve a user request increases, a proper evaluation of an agent's performance must go beyond the final answer to also assess the problem-solving trajectory, including previously ignored aspects such as efficiency, hallucination, and adaptivity. The most straightforward way to evaluate these aspects is to compare an agent's trajectory against a ground-truth trajectory, but this approach is fundamentally limited, since annotating all valid ground-truth trajectories is prohibitively expensive. At the same time, a simple LLM-based evaluator struggles to assess trajectories in detail without ground truth. To evaluate agents effectively in this setting, we introduce TRACE, a framework for multi-dimensional evaluation of tool-augmented LLM agent performance. By incorporating an evidence bank, which accumulates knowledge gathered from preceding reasoning steps, TRACE enables effective, multi-faceted analysis and evaluation of an agent's reasoning trajectory. To validate our framework, we develop a new meta-evaluation dataset by augmenting existing benchmarks with diverse and flawed trajectories, each labeled with multi-faceted performance scores. Our results confirm that TRACE accurately evaluates these complex behaviors in a scalable and cost-effective manner, even with small open-source LLMs. Furthermore, we apply our method to the trajectories that agents produce while solving tool-augmented tasks, presenting previously unreported observations and the corresponding insights.
Problem

Research questions and friction points this paper is trying to address.

Evaluating reasoning trajectories beyond final answer correctness
Assessing efficiency, hallucination, and adaptivity in problem-solving paths
Developing scalable evaluation without expensive ground-truth annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

TRACE framework evaluates reasoning trajectories multi-dimensionally
Evidence bank accumulates knowledge from prior reasoning steps
Enables scalable evaluation without ground-truth trajectory annotation
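The evidence-bank idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the names `EvidenceBank` and `score_trajectory` are invented here, and the `distill` method stands in for the paper's LLM-driven evidence extractor, which would summarize each step rather than store observations verbatim.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceBank:
    """Accumulates distilled justifications from preceding reasoning steps."""
    entries: list = field(default_factory=list)

    def distill(self, step: dict) -> None:
        # Stand-in for the LLM-driven extractor: a real system would ask an
        # LLM what verifiable evidence this step contributes; we simply keep
        # any non-empty tool observation.
        if step.get("observation"):
            self.entries.append(step["observation"])

    def supports(self, claim: str) -> bool:
        # Naive grounding check: is the claim substantiated by any entry?
        return any(claim in entry for entry in self.entries)

def score_trajectory(steps: list, final_claim: str) -> dict:
    """Toy multi-dimensional score over a trajectory of tool-call steps."""
    bank = EvidenceBank()
    for step in steps:
        bank.distill(step)
    # Faithfulness: the final claim should be grounded in accumulated evidence.
    faithfulness = 1.0 if bank.supports(final_claim) else 0.0
    # Efficiency: fraction of steps that contributed new evidence.
    efficiency = len(bank.entries) / max(len(steps), 1)
    return {"faithfulness": faithfulness, "efficiency": efficiency}

trajectory = [
    {"tool": "search", "observation": "Paris is the capital of France"},
    {"tool": "noop", "observation": ""},  # wasted step, no evidence gained
]
print(score_trajectory(trajectory, "Paris"))
# → {'faithfulness': 1.0, 'efficiency': 0.5}
```

The design point the sketch tries to capture is that the evaluator judges each claim against evidence accumulated so far, rather than against an annotated ground-truth trajectory, which is what lets the framework scale without exhaustive annotation.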