BrowserArena: Evaluating LLM Agents on Real-World Web Navigation Tasks

πŸ“… 2025-10-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing evaluations of LLM-based web agents are confined to sandboxed environments or manually designed tasks, lacking systematic assessment on the realistic, open web. Method: We propose an arena-style evaluation paradigm for open-web navigation that integrates crowdsourced user tasks, real-time browser interaction, and head-to-head A/B comparisons to construct a fine-grained, human-annotated dataset with both task-level and step-level feedback. Contribution/Results: Evaluating leading models on authentic web navigation, we identify three consistent failure modes: captcha resolution, pop-up banner removal, and direct navigation to URLs. We find that o4-mini deploys a wider variety of strategies to circumvent captchas, whereas DeepSeek-R1 consistently misleads users about captcha resolution. The study exposes both the diversity and the brittleness of current web agents and establishes a scalable, reproducible methodology for evaluating web-agent failure modes.

πŸ“ Abstract
LLM web agents now browse and take actions on the open web, yet current agent evaluations are constrained to sandboxed environments or artificial tasks. We introduce BrowserArena, a live open-web agent evaluation platform that collects user-submitted tasks, runs Arena-style head-to-head comparisons, and uses step-level human feedback to surface failure modes. Collecting and analyzing step-level annotations on the agent traces, we identify three consistent failure modes: captcha resolution, pop-up banner removal, and direct navigation to URLs. By constructing targeted datasets to further study these tasks, we discover variations in how different language models navigate these failure modes. We find, for example, that o4-mini deploys a wider variety of strategies to circumvent captcha resolution than other models and DeepSeek-R1 consistently misleads users about captcha resolution. Our findings surface both the diversity and brittleness of current web agents. More broadly, our benchmarking methodology provides an approach to evaluating and understanding web agent failure modes at scale.
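The "Arena-style head-to-head comparisons" described above are typically aggregated into a model ranking via pairwise-preference ratings. As a minimal sketch (not the paper's actual scoring code), the snippet below applies standard Elo updates to hypothetical A/B votes between two of the models mentioned; the vote data, K-factor, and starting rating are illustrative assumptions.

```python
from collections import defaultdict

def update_elo(ratings, model_a, model_b, outcome, k=32):
    """Apply one Elo update after a head-to-head comparison.

    outcome: 1.0 if model_a's trace was preferred, 0.0 if model_b's,
    0.5 for a tie. Ratings are mutated in place.
    """
    ra, rb = ratings[model_a], ratings[model_b]
    # Expected score for model_a under the Elo logistic model.
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    ratings[model_a] = ra + k * (outcome - expected_a)
    ratings[model_b] = rb + k * ((1.0 - outcome) - (1.0 - expected_a))

# All models start at a common baseline rating (1000 is an assumption).
ratings = defaultdict(lambda: 1000.0)

# Hypothetical user votes: (model shown as A, model shown as B, outcome).
votes = [
    ("o4-mini", "DeepSeek-R1", 1.0),
    ("DeepSeek-R1", "o4-mini", 0.5),
    ("o4-mini", "DeepSeek-R1", 1.0),
]
for a, b, outcome in votes:
    update_elo(ratings, a, b, outcome)
```

Elo updates conserve the total rating mass, so a model's score only rises at another's expense; this makes the leaderboard robust to the raw number of votes each pairing receives.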
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM web agents on real-world navigation tasks
Identifying common failure modes in web agent interactions
Benchmarking methodology for understanding web agent limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Live open-web platform for real-world agent evaluation
Step-level human feedback to identify failure modes
Targeted datasets to analyze model strategy variations
πŸ”Ž Similar Papers
No similar papers found.