AI Planning Framework for LLM-Based Web Agents

📅 2026-03-13
📈 Citations: 0 (influential: 0)
🤖 AI Summary
This study addresses the lack of interpretable planning mechanisms in existing large language model (LLM)-driven web agents, which hinders failure diagnosis and comprehensive evaluation of task execution quality. The authors model web tasks as sequential decision-making processes and establish, for the first time, a formal mapping between LLM-based agent architectures and classical search algorithms: breadth-first search (BFS), depth-first search (DFS), and best-first search. They further propose a five-dimensional trajectory quality assessment framework that goes beyond conventional success-rate metrics. Evaluations on a newly released dataset of 794 human-annotated trajectories show that Step-by-Step agents align better with human behavior (achieving a 38% task success rate), while Full-Plan-in-Advance agents attain 89% element-level accuracy, confirming the framework’s utility in guiding agent architecture selection.

📝 Abstract
Developing autonomous agents for web-based tasks is a core challenge in AI. While Large Language Model (LLM) agents can interpret complex user requests, they often operate as black boxes, making it difficult to diagnose why they fail or how they plan. This paper addresses this gap by formally treating web tasks as sequential decision-making processes. We introduce a taxonomy that maps modern agent architectures to traditional planning paradigms: Step-by-Step agents to Breadth-First Search (BFS), Tree Search agents to Best-First Tree Search, and Full-Plan-in-Advance agents to Depth-First Search (DFS). This framework allows for a principled diagnosis of system failures like context drift and incoherent task decomposition. To evaluate these behaviors, we propose five novel evaluation metrics that assess trajectory quality beyond simple success rates. We support this analysis with a new dataset of 794 human-labeled trajectories from the WebArena benchmark. Finally, we validate our evaluation framework by comparing a baseline Step-by-Step agent against a novel Full-Plan-in-Advance implementation. Our results reveal that while the Step-by-Step agent aligns more closely with human gold trajectories (38% overall success), the Full-Plan-in-Advance agent excels in technical measures such as element accuracy (89%), demonstrating the necessity of our proposed metrics for selecting appropriate agent architectures based on specific application constraints.
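The abstract's central idea is that a Tree Search agent behaves like best-first search over page states, expanding the most promising partial trajectory at each step. The following minimal sketch illustrates that mapping; the state representation, `score` heuristic, and `successors` function are illustrative assumptions, not the authors' implementation.

```python
import heapq

def best_first_search(start, goal_test, successors, score):
    """Expand the highest-scoring frontier state first, as a Tree Search
    web agent would expand its most promising candidate action.
    Returns the trajectory (list of states) reaching a goal, or None."""
    frontier = [(-score(start), [start])]  # max-heap via negated scores
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        state = path[-1]
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (-score(nxt), path + [nxt]))
    return None

# Toy usage on a small state graph (states stand in for web pages):
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
path = best_first_search(
    start=0,
    goal_test=lambda s: s == 3,
    successors=lambda s: graph[s],
    score=lambda s: s,  # hypothetical heuristic: prefer higher states
)
```

Swapping the frontier ordering recovers the other two paradigms in the taxonomy: a FIFO queue gives the BFS-like Step-by-Step agent, a LIFO stack the DFS-like Full-Plan-in-Advance agent.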
Problem

Research questions and friction points this paper is trying to address.

LLM-based web agents
AI planning
agent interpretability
task decomposition
trajectory evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Planning Framework
LLM-based Web Agents
Sequential Decision-Making
Agent Architecture Taxonomy
Trajectory Evaluation Metrics
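The contrast the paper draws between overall success (38%) and element-level accuracy (89%) implies per-step trajectory metrics alongside end-to-end ones. This sketch shows one plausible way such a pair of metrics could be computed; the position-wise comparison and the helper names are assumptions for illustration, not the authors' definitions.

```python
def element_accuracy(predicted, gold):
    """Illustrative element-level accuracy: fraction of steps where the
    agent acted on the same page element as the gold trajectory,
    compared position by position."""
    if not gold:
        return 0.0
    matches = sum(p == g for p, g in zip(predicted, gold))
    return matches / len(gold)

def trajectory_success(predicted, gold):
    """Strict end-to-end success: the full action sequence must match."""
    return predicted == gold

# A trajectory can score high on element accuracy yet fail outright:
gold = ["search_box", "submit", "link_3", "buy"]
pred = ["search_box", "submit", "link_2", "buy"]
```

Under these definitions, `pred` above reaches 0.75 element accuracy while counting as a failed trajectory, which mirrors the architecture trade-off the paper reports.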
Authors
Orit Shahnovsky (Faculty of Computer and Information Science, University of Haifa, Israel)
Rotem Dror (University of Haifa)
Machine Learning · Optimization · Natural Language Processing