🤖 AI Summary
Existing evaluations of LLM-based web agents are confined to sandboxed or manually designed tasks, lacking systematic assessment in realistic, open-web environments. Method: We propose the first arena-style evaluation paradigm for open-web navigation, integrating crowdsourced user tasks, real-time browser interaction, and A/B controlled experiments to construct a fine-grained, human-annotated dataset with both task-level and step-level feedback. Contribution/Results: Evaluating leading models on authentic web navigation, we identify three critical failure modes: CAPTCHA handling, modal dialog dismissal, and direct URL access. We find that o4-mini exhibits greater behavioral diversity, whereas DeepSeek-R1 frequently misleads users. The study exposes fundamental limitations in current agents' robustness and behavioral diversity, and establishes a scalable, reproducible methodology for evaluating web navigation capabilities.
📄 Abstract
LLM web agents now browse and take actions on the open web, yet current agent evaluations are constrained to sandboxed environments or artificial tasks. We introduce BrowserArena, a live open-web agent evaluation platform that collects user-submitted tasks, runs arena-style head-to-head comparisons, and uses step-level human feedback to surface failure modes. Collecting and analyzing step-level annotations on the agent traces, we identify three consistent failure modes: CAPTCHA resolution, pop-up banner removal, and direct navigation to URLs. By constructing targeted datasets to further study these tasks, we discover variations in how different language models navigate these failure modes. We find, for example, that o4-mini deploys a wider variety of strategies to circumvent CAPTCHA resolution than other models and that DeepSeek-R1 consistently misleads users about CAPTCHA resolution. Our findings surface both the diversity and brittleness of current web agents. More broadly, our benchmarking methodology provides an approach to evaluating and understanding web agent failure modes at scale.
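For illustration, arena-style head-to-head outcomes like those described above are commonly aggregated into a leaderboard with a pairwise rating scheme. The sketch below uses a standard Elo update; the rating function, K-factor, base rating, and match data are all illustrative assumptions, not BrowserArena's documented scoring method.

```python
from collections import defaultdict

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated (r_a, r_b) after one head-to-head comparison.

    score_a is 1.0 if model A was preferred, 0.0 if model B was, 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))  # zero-sum update
    return r_a_new, r_b_new

def rank_agents(matches, base=1000.0):
    """matches: iterable of (model_a, model_b, score_a) pairwise results.

    Returns [(model, rating), ...] sorted best-first.
    """
    ratings = defaultdict(lambda: base)
    for a, b, score_a in matches:
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], score_a)
    return sorted(ratings.items(), key=lambda kv: -kv[1])

# Hypothetical pairwise outcomes (1.0 = first model preferred, 0.5 = tie)
matches = [("o4-mini", "DeepSeek-R1", 1.0), ("o4-mini", "DeepSeek-R1", 0.5)]
print(rank_agents(matches))
```

Because each update is zero-sum, the total rating mass is conserved, so rankings reflect only relative preference strength across the collected comparisons.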