Read More, Think More: Revisiting Observation Reduction for Web Agents

📅 2026-04-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the optimal representation of web page observations for large language model (LLM)-driven web agents, aiming to balance information richness against the model's processing constraints. Through systematic evaluation of representations such as raw HTML and accessibility trees, the authors find that high-capacity models benefit significantly from full HTML inputs when sufficient reasoning tokens are available, and that the effectiveness of an observation representation is highly contingent on both model capability and token budget. Building on these insights, they introduce an adaptive selection strategy coupled with a token-efficient differential history mechanism that incorporates historical context, which consistently improves task success rates across diverse experimental settings.
📝 Abstract
Web agents based on large language models (LLMs) rely on observations of web pages -- commonly represented as HTML -- as the basis for identifying available actions and planning subsequent steps. Prior work has treated the verbosity of HTML as an obstacle to performance and adopted observation reduction as a standard practice. We revisit this trend and demonstrate that the optimal observation representation depends on model capability and thinking token budget: (1) compact observations (accessibility trees) are preferable for lower-capability models, while detailed observations (HTML) are advantageous for higher-capability models; moreover, increasing thinking tokens further amplifies the benefit of HTML. (2) Our error analysis suggests that higher-capability models exploit layout information in HTML for better action grounding, while lower-capability models suffer from increased hallucination under longer inputs. We also find that incorporating observation history improves performance across most models and settings, and a diff-based representation offers a token-efficient alternative. Based on these findings, we suggest practical guidelines: adaptively select observation representations based on model capability and thinking token budget, and incorporate observation history using diff-based representations.
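The abstract's diff-based observation history can be illustrated with a small sketch. The paper's exact format is not specified here, so the following assumes a simple unified diff between consecutive HTML snapshots; the function name `diff_observation` and the snapshot strings are made up for illustration.

```python
import difflib

def diff_observation(prev_html: str, curr_html: str, context: int = 2) -> str:
    """Return a unified diff between two page snapshots.

    Instead of re-sending the full HTML every step, the agent receives
    only the changed lines plus a little surrounding context, which is
    far cheaper in tokens. An empty string means the page did not change.
    """
    diff = difflib.unified_diff(
        prev_html.splitlines(),
        curr_html.splitlines(),
        fromfile="previous_observation",
        tofile="current_observation",
        lineterm="",
        n=context,  # lines of unchanged context kept for grounding
    )
    return "\n".join(diff)

prev = "<ul>\n<li>Home</li>\n<li>Cart (0)</li>\n</ul>"
curr = "<ul>\n<li>Home</li>\n<li>Cart (1)</li>\n</ul>"
print(diff_observation(prev, curr))
```

For small page updates (a cart counter, a toggled menu), the diff is a few lines regardless of total page size, which is the token-efficiency argument the abstract makes.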
Problem

Research questions and friction points this paper is trying to address.

observation reduction
web agents
large language models
HTML representation
thinking token budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

observation representation
web agents
large language models
thinking token budget
diff-based representation
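The adaptive-selection guideline from the abstract can be sketched as a simple policy. The capability score, threshold values, and function name below are invented for this illustration; the paper does not publish concrete cutoffs here.

```python
def select_representation(capability: float, thinking_budget: int) -> str:
    """Pick an observation representation per the paper's guideline.

    capability:      assumed model-capability score in [0, 1]
                     (e.g. derived from a held-out benchmark)
    thinking_budget: reasoning tokens available per step

    Per the abstract: compact accessibility trees suit lower-capability
    models, while full HTML pays off for higher-capability models, and
    a larger thinking-token budget amplifies HTML's advantage. The 0.7
    and 4096 thresholds are placeholders, not values from the paper.
    """
    if capability >= 0.7 and thinking_budget >= 4096:
        return "html"
    return "accessibility_tree"

print(select_representation(0.9, 8192))  # strong model, large budget
print(select_representation(0.4, 8192))  # weaker model falls back
```

Either branch would then be combined with the diff-based history mechanism, since the abstract reports that incorporating observation history helps across most models and settings.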