VisBrowse-Bench: Benchmarking Visual-Native Search for Multimodal Browsing Agents

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for multimodal web-browsing agents largely overlook the native visual information on webpages, and therefore fail to adequately assess agents' visual reasoning capabilities. To address this gap, this work proposes VisBrowse-Bench, a benchmark centered on visual-native search that comprises 169 cross-domain visual question-answering instances. The benchmark enables cross-modal evidence verification through text-image retrieval and joint reasoning, and is accompanied by an agent workflow designed to actively gather and integrate visual information during search. VisBrowse-Bench is the first benchmark to systematically evaluate the contribution of native webpage visuals, with all instances manually constructed and rigorously validated for quality. Experimental results show that even state-of-the-art models struggle: Claude-4.6-Opus reaches only 47.6% accuracy and o3-deep-research only 41.1%, highlighting significant limitations of current approaches.

📝 Abstract
The rapid advancement of Multimodal Large Language Models (MLLMs) has enabled browsing agents to acquire and reason over multimodal information in the real world. However, existing benchmarks suffer from two limitations: insufficient evaluation of visual reasoning ability and neglect of the native visual information of web pages in reasoning chains. To address these challenges, we introduce VisBrowse-Bench, a new benchmark for visual-native search. It contains 169 VQA instances covering multiple domains and evaluates models' visual reasoning capabilities during the search process through multimodal evidence cross-validation via text-image retrieval and joint reasoning. The data were constructed by human experts using a multi-stage pipeline and underwent rigorous manual verification. We additionally propose an agent workflow that effectively drives a browsing agent to actively collect and reason over visual information during search. We comprehensively evaluated both open-source and closed-source models in this workflow. Experimental results show that even the best-performing model, Claude-4.6-Opus, achieves an accuracy of only 47.6%, while the proprietary Deep Research model, o3-deep-research, achieves only 41.1%. The code and data can be accessed at: https://github.com/ZhengboZhang/VisBrowse-Bench
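The abstract describes an agent workflow that retrieves both textual and visual evidence and cross-validates it before answering. The paper's actual workflow is not reproduced here; the following is a minimal, hypothetical sketch of that idea, in which every function and name (`retrieve_text`, `retrieve_images`, `cross_validate`) is illustrative and the retrieval steps are stubbed out:

```python
from dataclasses import dataclass

# Hypothetical sketch of a visual-native browsing-agent loop.
# All names and logic are illustrative, not the paper's implementation.

@dataclass
class Evidence:
    source: str    # "text" or "image"
    content: str
    score: float   # retrieval confidence

def retrieve_text(query: str) -> list[Evidence]:
    # Stub: a real agent would issue a web search and parse page text.
    return [Evidence("text", f"text snippet about {query}", 0.8)]

def retrieve_images(query: str) -> list[Evidence]:
    # Stub: a real agent would fetch page images and caption them with an MLLM.
    return [Evidence("image", f"image caption about {query}", 0.7)]

def cross_validate(text_ev: list[Evidence], image_ev: list[Evidence],
                   threshold: float = 0.5) -> list[Evidence]:
    # Toy agreement check: keep evidence whose confidence clears a threshold.
    # A real system would compare semantic content across modalities.
    return [e for e in text_ev + image_ev if e.score >= threshold]

def answer(question: str) -> str:
    text_ev = retrieve_text(question)
    image_ev = retrieve_images(question)
    evidence = cross_validate(text_ev, image_ev)
    # A real agent would prompt an MLLM with the fused evidence; here we
    # concatenate it to mark where joint reasoning would happen.
    return "; ".join(e.content for e in evidence)
```

The point of the sketch is only the control flow: both modalities are queried, filtered jointly, and fed into a single reasoning step, which is the "multimodal evidence cross-validation" the benchmark is built to exercise.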
Problem

Research questions and friction points this paper is trying to address.

multimodal browsing agents
visual reasoning
visual-native search
benchmarking
web multimodal information
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual-native search
multimodal browsing agents
visual reasoning benchmark
multimodal evidence cross-validation
human-verified VQA dataset