TurkingBench: A Challenge Benchmark for Web Agents

📅 2024-03-18
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This study investigates whether advanced multimodal models can perform complex tasks in realistic, crowdsourced web environments. Method: The authors introduce TurkingBench, the first multimodal web-agent benchmark grounded in authentic HTML pages, comprising 158 task categories and 32.2K naturally occurring web instances drawn from original crowdsourced HTML rather than synthetic pages. The benchmark supports joint text-visual instruction understanding and includes an end-to-end action-mapping framework that translates LLM/VLM outputs into executable, evaluable web actions. The approach combines multimodal prompt engineering, DOM parsing, and action binding with a unified cross-model evaluation protocol compatible with both language-only and vision-language models. Contribution/Results: Experiments on GPT-4, InternVL, and other state-of-the-art models show consistent, statistically significant improvement over random baselines; however, absolute task success rates remain low, exposing critical bottlenecks in real-world interface comprehension and fine-grained interactive reasoning.

📝 Abstract
Can advanced multi-modal models effectively tackle complex web-based tasks? Such tasks are often found on crowdsourcing platforms, where crowdworkers engage in challenging micro-tasks within web-based environments. Building on this idea, we present TurkingBench, a benchmark consisting of tasks presented as web pages with textual instructions and multi-modal contexts. Unlike previous approaches that rely on artificially synthesized web pages, our benchmark uses natural HTML pages originally designed for crowdsourcing workers to perform various annotation tasks. Each task's HTML instructions are instantiated with different values derived from crowdsourcing tasks, creating diverse instances. This benchmark includes 32.2K instances spread across 158 tasks. To support the evaluation of TurkingBench, we have developed a framework that links chatbot responses to actions on web pages (e.g., modifying a text box, selecting a radio button). We assess the performance of cutting-edge private and open-source models, including language-only and vision-language models (such as GPT4 and InternVL), on this benchmark. Our results show that while these models outperform random chance, there is still significant room for improvement. We hope that this benchmark will drive progress in the evaluation and development of web-based agents.
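The abstract describes a framework that links chatbot responses to concrete actions on a web page (modifying a text box, selecting a radio button). A minimal sketch of that idea is below; the JSON action schema, function names, and simplified page state are illustrative assumptions, not the paper's actual implementation:

```python
import json

# Hypothetical action schema: the model emits a JSON string such as
# {"action": "type", "target": "q1", "value": "a dog"}.
# Real frameworks would bind these to live DOM elements; here the page
# is simplified to a dict mapping element ids to their current values.

def parse_action(response: str) -> dict:
    """Parse a model's JSON response into a validated action dict."""
    action = json.loads(response)
    if action["action"] not in {"type", "select_radio", "check"}:
        raise ValueError(f"unsupported action: {action['action']}")
    return action

def apply_action(page_state: dict, action: dict) -> dict:
    """Apply a parsed action to the simplified page state,
    returning the updated state (original left unchanged)."""
    state = dict(page_state)
    if action["action"] == "type":
        state[action["target"]] = action["value"]       # fill a text box
    elif action["action"] == "select_radio":
        state[action["target"]] = action["value"]       # pick a radio option
    elif action["action"] == "check":
        state[action["target"]] = True                  # tick a checkbox
    return state

page = {"q1": "", "rating": None}
page = apply_action(page, parse_action('{"action": "type", "target": "q1", "value": "a dog"}'))
page = apply_action(page, parse_action('{"action": "select_radio", "target": "rating", "value": "5"}'))
print(page)  # {'q1': 'a dog', 'rating': '5'}
```

Mapping free-form model output into a small, validated action vocabulary like this is what makes the responses executable and evaluable against gold annotations.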
Problem

Research questions and friction points this paper is trying to address.

Evaluate multi-modal models on web tasks
Assess performance on natural HTML pages
Drive progress in web-based agent development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Natural HTML pages
Multi-modal contexts integration
Framework linking chatbot responses to web-page actions