🤖 AI Summary
This work investigates the reasoning capabilities of large language model (LLM)-driven WebAgents under long contexts (25K–150K tokens), focusing on realistic web tasks that require multi-step interaction history for information retrieval and sequentially dependent decision-making. To address the lack of long-range interaction modeling in prior work, the authors introduce the first benchmark with explicit subtask dependency annotations and propose a novel evaluation framework: injecting irrelevant conversation trajectories to simulate multi-turn user behavior, and incorporating an implicit RAG mechanism that generates task-oriented summaries to mitigate context forgetting. Experiments on state-of-the-art models, including Claude-3.7 and GPT-4.1, reveal a sharp decline in success rate, from 40–50% to below 10%, as context length increases, driven primarily by recurrent hallucination ("looping") and goal drift. The study is the first to systematically identify these core bottlenecks in WebAgent long-context reasoning, and it provides a reproducible benchmark and a methodological paradigm for optimizing long-range reasoning.
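The context-inflation idea behind the evaluation framework can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual harness: trajectories are represented as lists of message strings, and the `count_tokens` whitespace heuristic stands in for a real tokenizer.

```python
import random

def inject_distractors(dependent_subtasks, distractor_pool, target_tokens,
                       count_tokens=lambda msgs: sum(len(m.split()) for m in msgs)):
    """Interleave irrelevant trajectories between sequentially dependent
    subtasks until the combined context approaches a target token budget,
    simulating long multi-session interaction histories (illustrative sketch)."""
    context = []
    pool = list(distractor_pool)
    random.shuffle(pool)  # vary which irrelevant sessions are injected
    n_gaps = max(1, len(dependent_subtasks) - 1)
    gap_budget = target_tokens // n_gaps  # split padding evenly across gaps
    for i, subtask in enumerate(dependent_subtasks):
        context.extend(subtask)
        if i == len(dependent_subtasks) - 1:
            break
        used = 0
        while pool and used < gap_budget:
            distractor = pool.pop()
            context.extend(distractor)
            used += count_tokens(distractor)
    return context
```

Because later subtasks depend on facts established in earlier ones, the injected material forces the agent to retrieve relevant information across the full padded context rather than from adjacent turns.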
📝 Abstract
As large language model (LLM)-based agents become increasingly integrated into daily digital interactions, their ability to reason across long interaction histories becomes crucial for providing personalized, contextually aware assistance. However, the performance of these agents in long-context scenarios, particularly for action-taking WebAgents operating in realistic web environments, remains largely unexplored. This paper introduces a benchmark for evaluating the long-context reasoning capabilities of WebAgents through sequentially dependent subtasks that require retrieving and applying information from extended interaction histories. We develop a novel evaluation framework that simulates multi-session user interactions by injecting irrelevant task trajectories between dependent subtasks, creating contexts ranging from 25,000 to 150,000 tokens. Through extensive evaluation of four popular models (Claude-3.7, GPT-4.1, Llama 4, and o4-mini), we observe dramatic performance degradation as context length increases, with success rates dropping from 40–50% in baseline conditions to less than 10% in long-context scenarios. Our detailed error analysis reveals that agents fail primarily by getting stuck in loops and losing track of their original task objectives. We further propose an implicit RAG approach that yields modest improvements by generating task-relevant summaries, though fundamental limitations in long-context reasoning persist. These findings highlight critical challenges for deploying WebAgents in realistic, long-term user interaction scenarios and provide insights for developing more robust agent architectures capable of maintaining coherent task execution across extended contexts.
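The implicit RAG mitigation described above can be sketched roughly as follows. This is an assumed shape, not the paper's implementation: `llm` is any callable that maps a prompt string to a completion, and the word-count budget check stands in for real token accounting.

```python
def summarize_for_task(history, task_goal, llm):
    """Implicit-RAG step: ask the model itself to compress the earlier
    interaction history into a short, goal-focused summary, dropping
    unrelated sessions (hypothetical prompt wording)."""
    prompt = (
        f"Current goal: {task_goal}\n"
        "Summarize only the facts from the history below that are needed "
        "to complete this goal. Omit unrelated tasks.\n\n"
        + "\n".join(history)
    )
    return llm(prompt)

def build_prompt(history, task_goal, llm, max_tokens=2000,
                 count=lambda s: len(s.split())):
    """Use the raw history when it fits the budget; otherwise replace it
    with a task-oriented summary to mitigate context forgetting."""
    raw = "\n".join(history)
    if count(raw) <= max_tokens:
        return raw + "\n" + task_goal
    return summarize_for_task(history, task_goal, llm) + "\n" + task_goal
```

Keeping the summary conditioned on the current goal is what distinguishes this from generic history truncation: facts needed by a later dependent subtask survive compression even when they appeared many sessions earlier.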