Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in existing web agent benchmarks: they largely ignore the user's physical environment and thus cannot evaluate cross-modal tasks that require integrating first-person visual perception with web interaction. To bridge this gap, we propose Ego2Web, the first end-to-end benchmark that jointly incorporates egocentric video and web-based tasks, covering realistic scenarios such as e-commerce and media retrieval. High-quality video-task pairs are constructed through an automated data-generation pipeline followed by human validation. We further introduce Ego2WebJudge, an LLM-based automatic evaluation framework that achieves roughly 84% agreement with human judgments, substantially outperforming existing evaluation methods. Experimental results reveal that current state-of-the-art agents perform poorly on Ego2Web, highlighting the need for advances in multimodal agent capabilities.
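
Neither the summary nor the abstract gives implementation details, but a minimal sketch can make the LLM-as-a-Judge setup concrete. Everything below (the prompt wording, model choice, function name, and binary verdict format) is an illustrative assumption, not Ego2WebJudge's actual design:

```python
# Minimal LLM-as-a-Judge sketch (illustrative assumptions throughout;
# not Ego2WebJudge's actual prompt, model, or rubric).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are grading a web agent on a task grounded in an egocentric video.

Task: {task}
Key objects and context extracted from the video: {video_context}
Agent's action trace on the website: {trace}

Did the agent complete the task? Answer with exactly one word: SUCCESS or FAILURE."""


def judge_episode(task: str, video_context: str, trace: str) -> bool:
    """Return True if the LLM judge labels the episode a success."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper's judge model is not specified here
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                task=task, video_context=video_context, trace=trace),
        }],
        temperature=0,  # deterministic verdicts make evaluation reproducible
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("SUCCESS")
```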

📝 Abstract
Multimodal AI agents are increasingly automating complex real-world workflows that involve online web execution. However, current web-agent benchmarks suffer from a critical limitation: they focus entirely on web-based interaction and perception, lacking grounding in the user's real-world physical surroundings. This prevents evaluation in crucial scenarios, such as when an agent must use egocentric visual perception (e.g., via AR glasses) to recognize an object in the user's surroundings and then complete a related task online. To address this gap, we introduce Ego2Web, the first benchmark designed to bridge egocentric video perception and web agent execution. Ego2Web pairs real-world first-person video recordings with web tasks that require visual understanding, web task planning, and interaction in an online environment for successful completion. We use an automatic data-generation pipeline combined with human verification and refinement to curate high-quality video-task pairs across diverse web task types, including e-commerce, media retrieval, and knowledge lookup. To enable accurate and scalable evaluation, we also develop a novel LLM-as-a-Judge automatic evaluation method, Ego2WebJudge, which achieves approximately 84% agreement with human judgment, substantially higher than existing evaluation methods. Experiments with diverse SoTA agents on Ego2Web show weak performance, with substantial headroom across all task categories. We also conduct a comprehensive ablation study on task design, highlighting the necessity of accurate video understanding in the proposed tasks and the limitations of current agents. We hope Ego2Web can be a critical new resource for developing truly capable AI assistants that can seamlessly see, understand, and act across the physical and digital worlds.
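
The roughly 84% figure reads most naturally as a plain per-episode agreement rate between the automatic judge and human annotators. A small sketch of that computation, assuming binary success labels on both sides (an assumption; the paper may use a finer-grained rubric):

```python
# Per-episode agreement between the automatic judge and human labels
# (sketch; assumes binary success/failure labels on both sides).
def agreement_rate(judge_labels: list[bool], human_labels: list[bool]) -> float:
    """Fraction of episodes where the judge's verdict matches the human's."""
    if len(judge_labels) != len(human_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(judge_labels)

# Toy example: the judge disagrees with humans on 4 of 25 episodes.
judge = [True] * 21 + [False] * 4
human = [True] * 25
print(f"{agreement_rate(judge, human):.0%}")  # 84%
```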
Problem

Research questions and friction points this paper is trying to address.

web agent benchmark
egocentric video
multimodal AI
physical-digital grounding
real-world perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

egocentric vision
web agent benchmark
multimodal AI
LLM-as-a-Judge
physical-digital grounding