🤖 AI Summary
This work addresses the lack of standardized evaluation for general-purpose web-browsing AI agents. We introduce WebGames, a fully client-side, dependency-free benchmark comprising 50+ interactive challenges spanning five dimensions: browser navigation, input comprehension, cognitive reasoning, workflow automation, and interactive entertainment. WebGames provides verifiable ground-truth solutions and a standardized evaluation protocol. Our systematic assessment reveals a substantial capability gap between state-of-the-art multimodal models and humans in everyday web interactions: the best-performing AI system achieves only a 43.1% success rate, compared to 95.7% for human users. The benchmark employs a lightweight, sandboxed execution environment enabling rapid iteration and fair, reproducible evaluation. WebGames is fully open-sourced, offering a standardized foundation for evaluating web-based autonomous agents.
📝 Abstract
We introduce WebGames, a comprehensive benchmark suite designed to evaluate general-purpose web-browsing AI agents through a collection of 50+ interactive challenges. These challenges are specifically crafted to be straightforward for humans while systematically testing the limitations of current AI systems across fundamental browser interactions, advanced input processing, cognitive tasks, workflow automation, and interactive entertainment. Our framework eliminates external dependencies through a hermetic testing environment, ensuring reproducible evaluation with verifiable ground-truth solutions. We evaluate leading vision-language models, including GPT-4o, Claude Computer-Use, Gemini-1.5-Pro, and Qwen2-VL, against human performance. Results reveal a substantial capability gap: the best AI system achieves only a 43.1% success rate compared to 95.7% for humans, highlighting fundamental limitations in current AI systems' ability to handle common web interaction patterns that humans find intuitive. The benchmark is publicly available at webgames.convergence.ai, offering a lightweight, client-side implementation that facilitates rapid evaluation cycles. Through its modular architecture and standardized challenge specifications, WebGames provides a robust foundation for measuring progress in the development of more capable web-browsing agents.