🤖 AI Summary
Existing benchmarks struggle to evaluate the step-level process quality and error propagation of tool-using agents in open, dynamic environments. This work proposes the first step-level evaluation benchmark tailored to real-world tool-invocation scenarios, comprising 1,000 execution trajectories and 8,509 human annotations with high inter-annotator agreement (IAA: 89.1%). The benchmark introduces ternary action labels (correct, neutral, and incorrect) and formalizes error-propagation rules to construct a trajectory-level process evaluation framework. The study reveals that weak policy models exhibit inflated ratios of correct steps due to premature termination; that current models have difficulty distinguishing neutral from erroneous actions; and that incorporating process-level signals significantly enhances test-time scaling, demonstrating their complementary value to outcome-based supervision.
📝 Abstract
While Large Language Models (LLMs) have evolved into tool-using agents, they remain brittle in long-horizon interactions. Unlike mathematical reasoning where errors are often rectifiable via backtracking, tool-use failures frequently induce irreversible side effects, making accurate step-level verification critical. However, existing process-level benchmarks are predominantly confined to closed-world mathematical domains, failing to capture the dynamic and open-ended nature of tool execution. To bridge this gap, we introduce AgentProcessBench, the first benchmark dedicated to evaluating step-level effectiveness in realistic, tool-augmented trajectories. The benchmark comprises 1,000 diverse trajectories and 8,509 human-labeled step annotations with 89.1% inter-annotator agreement. It features a ternary labeling scheme to capture exploration and an error propagation rule to reduce labeling ambiguity. Extensive experiments reveal key insights: (1) weaker policy models exhibit inflated ratios of correct steps due to early termination; (2) distinguishing neutral and erroneous actions remains a significant challenge for current models; and (3) process-derived signals provide complementary value to outcome supervision, significantly enhancing test-time scaling. We hope AgentProcessBench can foster future research in reward models and pave the way toward general agents. The code and data are available at https://github.com/RUCBM/AgentProcessBench.
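The ternary labeling scheme and error-propagation rule described above can be sketched in code. The paper's exact rule is not given here, so the sketch below assumes one plausible formulation: once a step is labeled incorrect, all downstream steps are treated as compromised, and the trajectory's correct-step ratio only credits the error-free prefix. The names `StepLabel`, `propagate_errors`, and `correct_step_ratio` are illustrative, not the benchmark's API.

```python
from enum import Enum


class StepLabel(Enum):
    CORRECT = "correct"      # the action advances the task
    NEUTRAL = "neutral"      # harmless exploration; no progress, no damage
    INCORRECT = "incorrect"  # an erroneous action


def propagate_errors(labels: list[StepLabel]) -> list[StepLabel]:
    """Illustrative error-propagation rule (an assumption, not the
    paper's exact definition): every step after the first incorrect
    action is also marked incorrect, modeling irreversible side effects."""
    compromised = False
    scored = []
    for label in labels:
        scored.append(StepLabel.INCORRECT if compromised else label)
        if label is StepLabel.INCORRECT:
            compromised = True
    return scored


def correct_step_ratio(labels: list[StepLabel]) -> float:
    """Fraction of steps labeled correct after propagation.
    Neutral steps count toward trajectory length but earn no credit,
    which is why premature termination can inflate this ratio."""
    scored = propagate_errors(labels)
    return sum(l is StepLabel.CORRECT for l in scored) / len(scored)
```

Under this rule, a trajectory `[correct, neutral, incorrect, correct]` scores as `[correct, neutral, incorrect, incorrect]`, giving a correct-step ratio of 0.25; a short trajectory that stops before making an error can score higher, matching the inflated-ratio finding.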