An Illusion of Progress? Assessing the Current State of Web Agents

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a critical overestimation of the real-world capabilities of LLM-driven autonomous web agents by existing benchmarks. To address this, we introduce Online-Mind2Web, an online evaluation benchmark grounded in authentic interaction scenarios, comprising 300 diverse tasks across 136 live websites. We further propose an LLM-as-a-Judge automatic evaluation method that reaches about 85% agreement with human judgments, roughly 30 points above existing methods. Through systematic evaluation of state-of-the-art web agents, we surface capability bottlenecks in three core dimensions: navigation, form filling, and handling of temporal dependencies. Our work establishes a more reliable, scalable, and user-aligned evaluation paradigm for web agents, together with practical infrastructure for rigorous, realistic assessment.

📝 Abstract
As digitalization and cloud technologies evolve, the web is becoming increasingly important in modern society. Autonomous web agents based on large language models (LLMs) hold great potential for work automation. It is therefore important to accurately measure and monitor the progress of their capabilities. In this work, we conduct a comprehensive and rigorous assessment of the current state of web agents. Our results depict a very different picture of the competency of current agents, suggesting over-optimism in previously reported results. This gap can be attributed to shortcomings in existing benchmarks. We introduce Online-Mind2Web, an online evaluation benchmark consisting of 300 diverse and realistic tasks spanning 136 websites. It enables us to evaluate web agents under a setting that approximates how real users use these agents. To facilitate more scalable evaluation and development, we also develop a novel LLM-as-a-Judge automatic evaluation method and show that it can achieve around 85% agreement with human judgment, substantially higher than existing methods. Finally, we present the first comprehensive comparative analysis of current web agents, highlighting both their strengths and limitations to inspire future research.
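The abstract's headline number is the agreement rate between the automatic judge and human raters over per-task success labels. As a hedged illustration only (not the paper's evaluation code), simple percent agreement over paired success/failure labels could be computed like this; `percent_agreement` and the commented-out `load_labels` loader are hypothetical names introduced for this sketch:

```python
# Percent agreement between LLM-judge and human labels (illustrative sketch).
# Labels are per-task booleans: True = task judged successful.

def percent_agreement(judge: list[bool], human: list[bool]) -> float:
    """Fraction of tasks where the LLM judge matches the human rater."""
    assert len(judge) == len(human) and judge, "need paired, non-empty labels"
    matches = sum(j == h for j, h in zip(judge, human))
    return matches / len(judge)

# Example usage over a 300-task benchmark:
# judge_labels, human_labels = load_labels(...)  # hypothetical loader
# print(f"agreement: {percent_agreement(judge_labels, human_labels):.1%}")
```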
Problem

Research questions and friction points this paper is trying to address.

Assessing competency gaps in current web agents
Introducing Online-Mind2Web for realistic task evaluation
Developing LLM-as-a-Judge for scalable agent assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online-Mind2Web benchmark for realistic tasks
LLM-as-a-Judge automatic evaluation method (see the sketch after this list)
Comprehensive comparative analysis of web agents
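The page describes the LLM-as-a-Judge method only at a high level. The following is a minimal sketch, assuming the OpenAI Python SDK, of how a judge pass over a single agent trajectory might look; the prompt wording, the `judge_task_success` helper, and the `gpt-4o` model choice are illustrative assumptions, not the authors' implementation:

```python
# Minimal LLM-as-a-Judge sketch (illustrative; not the paper's implementation).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are evaluating an autonomous web agent.
Task: {task}
Agent's final answer: {answer}
Action/observation log (textual): {trajectory}

Did the agent fully complete the task? Reply with exactly one word:
success or failure."""

def judge_task_success(task: str, answer: str, trajectory: str,
                       model: str = "gpt-4o") -> bool:
    """Ask an LLM judge whether a single web task was completed."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic judging
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(
                       task=task, answer=answer, trajectory=trajectory)}],
    )
    verdict = (response.choices[0].message.content or "").strip().lower()
    return verdict.startswith("success")
```

A judge like this would be run once per task and its binary verdicts compared against human labels, which is how an agreement figure such as the reported ~85% could be obtained in principle.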