Emergence WebVoyager: Toward Consistent and Transparent Evaluation of (Web) Agents in The Wild

📅 2026-03-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing evaluations of web-based agents suffer from ambiguous task definitions and inconsistent operational protocols, hindering reliable and reproducible performance comparisons. This work addresses these limitations by reconstructing the WebVoyager benchmark and introducing a standardized evaluation framework that features precise task instantiation, systematic failure categorization, a multi-annotator labeling protocol, and cross-domain assessment methodology. The resulting framework achieves high inter-annotator agreement (95.9%). When applied to evaluate OpenAI Operator, it reveals a true success rate of 68.6%—substantially lower than the officially reported 87%—demonstrating significant overestimation in prior results. These findings underscore the framework’s critical role in enhancing the transparency, rigor, and comparability of agent evaluations on web tasks.
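
The 95.9% figure is an inter-annotator agreement score; the summary does not say how it is computed, but a common choice for binary success/failure labels is mean pairwise percent agreement. The following is a minimal Python sketch of that computation, assuming that metric; the function name and the example labels are illustrative, not taken from the paper.

```python
from itertools import combinations

def percent_agreement(labels_by_annotator):
    """Mean pairwise percent agreement across annotators, averaged over tasks.

    labels_by_annotator: one list of task labels per annotator,
    all lists the same length.
    """
    pairs = list(combinations(range(len(labels_by_annotator)), 2))
    n_tasks = len(labels_by_annotator[0])
    # Count how many annotator pairs agree on each task, summed over all tasks.
    agreements = sum(
        labels_by_annotator[a][t] == labels_by_annotator[b][t]
        for t in range(n_tasks)
        for a, b in pairs
    )
    return agreements / (n_tasks * len(pairs))

# Hypothetical success/failure labels from three annotators on five tasks
a1 = ["success", "failure", "success", "failure", "success"]
a2 = ["success", "failure", "success", "success", "success"]
a3 = ["success", "failure", "success", "failure", "success"]
print(f"{percent_agreement([a1, a2, a3]):.1%}")  # 86.7%
```

A more robust protocol would use a chance-corrected statistic such as Cohen's or Fleiss' kappa, but raw percent agreement is the simplest reading of a headline figure like 95.9%.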
📝 Abstract
Reliable evaluation of AI agents operating in complex, real-world environments requires methodologies that are robust, transparent, and contextually aligned with the tasks agents are intended to perform. This study identifies persistent shortcomings in existing AI agent evaluation practices, shortcomings that are particularly acute for web agents: as our audit of WebVoyager shows, task-framing ambiguity and operational variability hinder meaningful and reproducible performance comparisons. To address these challenges, we introduce Emergence WebVoyager, an enhanced version of the WebVoyager benchmark that standardizes evaluation methodology through clear guidelines for task instantiation, failure handling, annotation, and reporting. Emergence WebVoyager achieves an inter-annotator agreement of 95.9%, indicating improved clarity and reliability in both task formulation and evaluation. Applying this framework to evaluate OpenAI Operator reveals substantial performance variation across domains and task types, with an overall success rate of 68.6%, substantially lower than the 87% previously reported by OpenAI. This gap demonstrates the utility of our approach for more rigorous and comparable web agent evaluation.
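
The abstract names four methodological pillars: task instantiation, failure handling, annotation, and reporting. As a rough illustration of how a standardized per-task record with a multi-annotator labeling protocol might be structured, here is a hedged Python sketch; every field name, failure category, and the majority-vote rule are assumptions for illustration, not the paper's actual schema.

```python
from collections import Counter
from dataclasses import dataclass, field
from enum import Enum

class FailureCategory(Enum):
    # Illustrative categories only; the paper's taxonomy may differ.
    NONE = "none"                      # task succeeded
    NAVIGATION = "navigation_error"    # agent could not reach the target page
    EXTRACTION = "extraction_error"    # wrong or missing information returned
    ENVIRONMENT = "environment_error"  # site outage, CAPTCHA, rate limiting

@dataclass
class TaskEvaluation:
    task_id: str
    instruction: str                   # precisely instantiated task text
    domain: str                        # e.g. shopping, travel, search
    annotator_labels: list = field(default_factory=list)  # "success"/"failure"
    failure_category: FailureCategory = FailureCategory.NONE

    def verdict(self):
        """Aggregate annotator labels by majority vote."""
        counts = Counter(self.annotator_labels)
        return counts.most_common(1)[0][0]

# Example: two of three annotators mark the task as failed
ev = TaskEvaluation(
    task_id="shopping-017",
    instruction="Find the price of the cheapest 65W USB-C charger.",
    domain="shopping",
    annotator_labels=["failure", "failure", "success"],
    failure_category=FailureCategory.EXTRACTION,
)
print(ev.verdict())  # "failure"
```

Recording an explicit failure category alongside the verdict is what enables the cross-domain breakdowns the summary describes, rather than reporting a single aggregate success rate.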
Problem

Research questions and friction points this paper is trying to address.

AI agent evaluation
web agents
evaluation benchmark
reproducibility
task ambiguity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Web Agent Evaluation
Benchmark Standardization
Inter-annotator Agreement
Transparent AI Evaluation
Emergence WebVoyager
Authors
Deepak Akkil (Emergence AI)
Mowafak Allaham (Northwestern University)
Amal Raj (Emergence AI)
Tamer Abuelsaad (Emergence AI)
Ravi Kokku (Merlyn Mind Inc.)