🤖 AI Summary
This study addresses the lack of interpretable planning mechanisms in existing large language model (LLM)-driven web agents, which hinders failure diagnosis and comprehensive evaluation of task execution quality. The authors model web tasks as sequential decision-making processes and establish, for the first time, a formal mapping framework between LLM-based agents and classical search algorithms—namely breadth-first search (BFS), depth-first search (DFS), and best-first search. They further propose a five-dimensional trajectory quality assessment framework that goes beyond conventional success-rate metrics. Evaluations on a newly released dataset of 794 human-annotated trajectories demonstrate that Step-by-Step agents align more closely with human behavior (achieving a 38% task success rate), while Full-Plan-in-Advance agents attain 89% element-level accuracy, confirming the framework’s utility in guiding agent architecture selection.
📝 Abstract
Developing autonomous agents for web-based tasks is a core challenge in AI. While Large Language Model (LLM) agents can interpret complex user requests, they often operate as black boxes, making it difficult to diagnose why they fail or how they plan. This paper addresses this gap by formally treating web tasks as sequential decision-making processes. We introduce a taxonomy that maps modern agent architectures to traditional planning paradigms: Step-by-Step agents to Breadth-First Search (BFS), Tree Search agents to Best-First Tree Search, and Full-Plan-in-Advance agents to Depth-First Search (DFS). This framework enables principled diagnosis of system failures such as context drift and incoherent task decomposition. To evaluate these behaviors, we propose five novel evaluation metrics that assess trajectory quality beyond simple success rates. We support this analysis with a new dataset of 794 human-labeled trajectories from the WebArena benchmark. Finally, we validate our evaluation framework by comparing a baseline Step-by-Step agent against a novel Full-Plan-in-Advance implementation. Our results reveal that while the Step-by-Step agent aligns more closely with human gold trajectories (38% overall success), the Full-Plan-in-Advance agent excels in technical measures such as element accuracy (89%), demonstrating the necessity of our proposed metrics for selecting appropriate agent architectures based on specific application constraints.
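The BFS/DFS/best-first correspondence at the heart of the taxonomy can be sketched with a classic generic search loop in which only the frontier discipline changes: a FIFO queue yields BFS (Step-by-Step), a LIFO stack yields DFS (Full-Plan-in-Advance), and a priority queue yields best-first search (Tree Search). This is a minimal illustrative sketch, not the paper's implementation; the toy state graph, `goal_test`, and `score` function below are hypothetical placeholders standing in for web-page states, task completion checks, and an LLM's value estimate.

```python
import heapq
from collections import deque

def search(start, goal_test, successors, strategy="bfs", score=None):
    """Generic search over states; the frontier discipline picks the paradigm:
    "bfs"  -> FIFO queue      (Step-by-Step agents)
    "dfs"  -> LIFO stack      (Full-Plan-in-Advance agents)
    "best" -> priority queue  (Tree Search agents, ordered by `score`)
    Returns the path from `start` to the first goal state found, or None."""
    if strategy == "best":
        frontier = [(score(start), start, [start])]  # min-heap of (score, state, path)
    else:
        frontier = deque([(start, [start])])         # deque used as queue or stack
    visited = {start}
    while frontier:
        if strategy == "bfs":
            state, path = frontier.popleft()         # oldest first: level order
        elif strategy == "dfs":
            state, path = frontier.pop()             # newest first: commit deep
        else:
            _, state, path = heapq.heappop(frontier) # most promising first
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                if strategy == "best":
                    heapq.heappush(frontier, (score(nxt), nxt, path + [nxt]))
                else:
                    frontier.append((nxt, path + [nxt]))
    return None

# Hypothetical toy state graph standing in for web navigation states.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
bfs_path = search("A", lambda s: s == "D", lambda s: succ.get(s, []), "bfs")
dfs_path = search("A", lambda s: s == "D", lambda s: succ.get(s, []), "dfs")
```

On the same graph the three strategies can reach the goal through different paths, which is exactly the behavioral difference the paper's trajectory-quality metrics are designed to surface beyond a binary success signal.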