🤖 AI Summary
Existing web browsing agents remain insufficiently robust on tedious, time-consuming daily chores that humans prefer to avoid. Method: We introduce WebChoreArena, the first dedicated benchmark for evaluating agents on such tasks, comprising 532 carefully curated, realistic web-based chores. It systematically integrates three key challenges: Massive Memory tasks requiring accurate retrieval of large amounts of information from observations, Calculation tasks demanding precise mathematical reasoning, and Long-Term Memory tasks requiring information to be retained across multiple webpages. WebChoreArena is built on the four fully reproducible WebArena simulation environments, ensuring strict reproducibility and enabling fair, direct comparisons with the established WebArena benchmark. Contribution/Results: Experiments show that performance on WebChoreArena improves significantly as LLMs evolve, from GPT-4o to Claude 3.7 Sonnet to Gemini 2.5 Pro, yet even Gemini 2.5 Pro scores substantially lower than on WebArena, demonstrating the benchmark's heightened difficulty and its power to discriminate real-world chore-solving capability.
📝 Abstract
Powered by a large language model (LLM), a web browsing agent operates web browsers in a human-like manner and offers a highly transparent path toward automating a wide range of everyday tasks. As web agents become increasingly capable and demonstrate proficiency in general browsing tasks, a critical question emerges: can they go beyond general browsing to robustly handle tasks that are tedious and complex, or chores that humans often avoid doing themselves? In this paper, we introduce WebChoreArena, a new fully reproducible benchmark comprising 532 carefully curated tasks designed to extend the scope of WebArena beyond general browsing to more labor-intensive and tedious tasks. WebChoreArena systematically integrates three key challenges: (i) Massive Memory tasks, which require accurately retrieving large amounts of information from observations; (ii) Calculation tasks, which demand precise mathematical reasoning; and (iii) Long-Term Memory tasks, which necessitate retaining information across multiple webpages. Built on top of the four fully reproducible and widely adopted WebArena simulation environments, WebChoreArena ensures strict reproducibility and enables fair, direct comparisons with the established WebArena benchmark, offering key insights into agent progress. Our experimental results demonstrate that as LLMs evolve, represented by GPT-4o, Claude 3.7 Sonnet, and Gemini 2.5 Pro, significant improvements in performance are observed on WebChoreArena. These findings suggest that WebChoreArena is well-suited to measure the advancement of state-of-the-art LLMs with greater clarity. Nevertheless, the results also indicate that even with Gemini 2.5 Pro, there remains substantial room for improvement compared to WebArena, highlighting the increased challenges posed by WebChoreArena.