🤖 AI Summary
Current LLM-based web agents frequently commit irreversible errors (e.g., repeatedly purchasing non-refundable tickets) in long-horizon tasks due to the absence of an explicit "world model", i.e., the capacity to reason about action consequences and environmental dynamics. Method: This work empirically demonstrates, for the first time, the pervasive lack of world-modeling capability across mainstream LLMs, and proposes a world-model-enhanced web navigation agent featuring: (1) a natural-language state-transition abstraction grounded in state differences to enable efficient learning of environment dynamics; and (2) a world-model-augmented architecture integrating transition-focused observation abstraction, HTML semantic compression, difference-description generation, and training-free policy optimization. Results: The agent achieves significant improvements in task success rates on WebArena and Mind2Web; compared to tree-search baselines, it substantially reduces API-call costs and latency while requiring no policy fine-tuning.
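The training-free policy optimization described above can be sketched roughly as follows: for each candidate action, a world model predicts the resulting state in natural language, a value function scores the prediction, and the agent executes the best-scoring action. All function names and the toy stand-ins below are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch of a world-model-augmented decision loop:
# simulate each candidate action's outcome, score it, act on the best one.
# No policy fine-tuning is involved -- only inference-time simulation.

def simulate_outcome(world_model, observation, action):
    """Ask the world model for a free-form description of the next state."""
    prompt = (
        f"Current page:\n{observation}\n"
        f"Proposed action: {action}\n"
        "Describe only what would change on the page."
    )
    return world_model(prompt)  # returns a natural-language prediction

def choose_action(world_model, value_fn, observation, candidates):
    """Pick the candidate action whose predicted outcome scores highest."""
    scored = [
        (value_fn(observation, a, simulate_outcome(world_model, observation, a)), a)
        for a in candidates
    ]
    return max(scored)[1]

# Toy stand-ins so the loop runs without an LLM backend.
wm = lambda prompt: "cart now contains 1 item" if "add_to_cart" in prompt else "no change"
vf = lambda obs, act, pred: 1.0 if "cart" in pred else 0.0
best = choose_action(wm, vf, "<html>product page</html>", ["scroll", "click[add_to_cart]"])
```

In practice both `world_model` and `value_fn` would be LLM calls; the key design point is that simulation replaces costly real-environment rollouts or tree search.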
📄 Abstract
Large language models (LLMs) have recently gained much attention in building autonomous agents. However, the performance of current LLM-based web agents in long-horizon tasks is far from optimal, often yielding errors such as repeatedly buying a non-refundable flight ticket. By contrast, humans can avoid such irreversible mistakes, as we have an awareness of the potential outcomes (e.g., losing money) of our actions, also known as a "world model". Motivated by this, our study first starts with preliminary analyses, confirming the absence of world models in current LLMs (e.g., GPT-4o, Claude-3.5-Sonnet, etc.). Then, we present a World-Model-Augmented (WMA) web agent, which simulates the outcomes of its actions for better decision-making. To overcome the challenges in training LLMs as world models predicting next observations, such as repeated elements across observations and long HTML inputs, we propose a transition-focused observation abstraction, where the prediction objectives are free-form natural language descriptions exclusively highlighting important state differences between time steps. Experiments on WebArena and Mind2Web show that our world models improve agents' policy selection without training, and demonstrate our agents' cost- and time-efficiency compared to recent tree-search-based agents.
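The transition-focused observation abstraction can be illustrated with a minimal sketch: rather than predicting a full, highly repetitive HTML page, the world model's target keeps only the elements that differ between consecutive time steps, phrased as a short description. The dict-based observation format and tag labels below are assumptions for illustration, not the paper's exact representation.

```python
# Hypothetical sketch: abstract a state transition into a short
# natural-language-style diff, keeping only what changed between steps.

def abstract_transition(obs_t, obs_t1):
    """Summarize state differences between two observations.

    Each observation is modeled here as a dict mapping element ids to
    their text content (a stand-in for a parsed page/accessibility tree).
    """
    added = {k: v for k, v in obs_t1.items() if k not in obs_t}
    removed = {k: v for k, v in obs_t.items() if k not in obs_t1}
    changed = {k: (obs_t[k], obs_t1[k])
               for k in obs_t.keys() & obs_t1.keys() if obs_t[k] != obs_t1[k]}
    parts = []
    parts += [f"[ADD] {k}: {v}" for k, v in added.items()]
    parts += [f"[DEL] {k}: {v}" for k, v in removed.items()]
    parts += [f"[MOD] {k}: '{a}' -> '{b}'" for k, (a, b) in changed.items()]
    return "; ".join(parts) or "no visible change"

before = {"btn#buy": "Buy ticket", "div#price": "$120"}
after = {"btn#buy": "Buy ticket", "div#price": "$120", "div#alert": "Non-refundable!"}
diff = abstract_transition(before, after)
```

Training a world model to emit such compact diffs instead of raw next-page HTML is what makes learning the environment dynamics tractable despite long, repetitive observations.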