AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents

📅 2024-10-17
🏛️ International Conference on Learning Representations
📈 Citations: 18
Influential: 2
🤖 AI Summary
LLM-based web agents often suffer from a misalignment between their observation/action representations and the LLM's pretraining distribution: models trained for language completion are handed symbolic web elements and embodied navigation actions, which hurts generalization. This paper proposes AgentOccam, a lightweight agent framework that addresses this misalignment by simply refining the observation and action spaces to better match the LLM's capabilities, without prompt templates, in-context examples, multi-agent coordination, online feedback, or search, enabling zero-shot, end-to-end decision-making. On the WebArena benchmark, AgentOccam surpasses the prior state of the art by 9.8 absolute points (+29.4%) and concurrent work by 5.9 points (+15.8%), and boosts the success rate of comparable plain web agents by 26.6 points (+161%).
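The observation-space refinement described above can be illustrated with a minimal sketch: pruning a verbose accessibility-tree dump of a webpage into the compact, language-like text an LLM is more likely to have seen during pretraining. The node structure, `KEEP_ROLES` set, and filtering rules here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of observation-space simplification for a web agent.
# The node format and the kept-role list are illustrative assumptions,
# not AgentOccam's published pipeline.

KEEP_ROLES = {"link", "button", "textbox", "heading", "StaticText"}

def simplify(node, depth=0, lines=None):
    """Flatten an accessibility-tree node into concise indented text,
    dropping roles the LLM is unlikely to need for decision-making."""
    if lines is None:
        lines = []
    role = node.get("role", "")
    name = node.get("name", "").strip()
    if role in KEEP_ROLES and name:
        lines.append("  " * depth + f"[{node['id']}] {role} '{name}'")
        depth += 1
    for child in node.get("children", []):
        simplify(child, depth, lines)
    return lines

tree = {
    "id": 1, "role": "RootWebArea", "name": "Shop", "children": [
        {"id": 2, "role": "generic", "name": "", "children": [
            {"id": 3, "role": "link", "name": "Cart", "children": []},
            {"id": 4, "role": "image", "name": "banner.png", "children": []},
        ]},
        {"id": 5, "role": "button", "name": "Checkout", "children": []},
    ],
}

print("\n".join(simplify(tree)))
# Keeps only the actionable/text nodes: the 'Cart' link and 'Checkout' button.
```

The intuition is that wrapper nodes and decorative elements add tokens without adding decision-relevant signal, so stripping them moves the observation closer to plain text.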

📝 Abstract
Autonomy via agents using large language models (LLMs) for personalized, standardized tasks boosts human efficiency. Automating web tasks (like booking hotels within a budget) is increasingly sought after. Fulfilling practical needs, the web agent also serves as an important proof-of-concept example for various agent grounding scenarios, with its success promising advancements in many future applications. Prior research often handcrafts web agent strategies (e.g., prompting templates, multi-agent systems, search methods, etc.) and the corresponding in-context examples, which may not generalize well across all real-world scenarios. On the other hand, there has been limited study on the misalignment between a web agent's observation/action representation and the pre-training data of the LLM it's based on. This discrepancy is especially notable when LLMs are primarily trained for language completion rather than tasks involving embodied navigation actions and symbolic web elements. Our study enhances an LLM-based web agent by simply refining its observation and action space to better align with the LLM's capabilities. This approach enables our base agent to significantly outperform previous methods on a wide variety of web tasks. Specifically, on WebArena, a benchmark featuring general-purpose web interaction tasks, our agent AgentOccam surpasses the previous state-of-the-art and concurrent work by 9.8 (+29.4%) and 5.9 (+15.8%) absolute points respectively, and boosts the success rate by 26.6 points (+161%) over similar plain web agents with its observation and action space alignment. We achieve this without using in-context examples, new agent roles, online feedback or search strategies. AgentOccam's simple design highlights LLMs' impressive zero-shot performance on web tasks, and underlines the critical role of carefully tuning observation and action spaces for LLM-based agents.
Problem

Research questions and friction points this paper is trying to address.

Improving LLM-based web agents' observation-action alignment
Enhancing generalization without handcrafted strategies
Boosting zero-shot performance on diverse web tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns observation and action spaces with the LLM's pretraining distribution
Outperforms prior state-of-the-art web agents on WebArena without added agent machinery
Achieves strong zero-shot performance without in-context examples or prompt templates
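The refined action space mentioned in these bullets can be sketched as a small fixed vocabulary of plain-language commands that an LLM can generate reliably and a parser can validate. The exact action names and bracket syntax below are illustrative assumptions, not the paper's published action set.

```python
import re

# Illustrative fixed action vocabulary for a web agent; the exact set of
# actions and their syntax are assumptions, not AgentOccam's action space.
ACTION_PATTERNS = {
    "click":   re.compile(r"^click \[(\d+)\]$"),
    "type":    re.compile(r"^type \[(\d+)\] \[(.+)\]$"),
    "go_back": re.compile(r"^go_back$"),
    "stop":    re.compile(r"^stop \[(.*)\]$"),
}

def parse_action(text):
    """Map an LLM-generated line onto a (name, args) tuple,
    rejecting anything outside the fixed vocabulary."""
    text = text.strip()
    for name, pattern in ACTION_PATTERNS.items():
        m = pattern.match(text)
        if m:
            return name, m.groups()
    raise ValueError(f"Unrecognized action: {text!r}")

print(parse_action("click [5]"))             # ('click', ('5',))
print(parse_action("type [7] [budget 100]")) # ('type', ('7', 'budget 100'))
```

Restricting generation to a small command vocabulary keeps the agent's output close to ordinary text completion while remaining trivially machine-executable.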