🤖 AI Summary
Existing deep research agents rely predominantly on textual inputs, limiting their ability to perform the vision-language collaborative reasoning that complex real-world information-retrieval tasks require.
Method: We propose WebWatcher, the first agent framework explicitly designed for vision-language deep research, integrating multimodal perception, logical reasoning, and tool-augmented execution. To enable cold-start training, we curate high-quality synthetic trajectory data; we further introduce BrowseComp-VL, a dedicated benchmark for evaluating cross-modal retrieval performance. Our approach combines vision-language models, reinforcement learning, and synthetic-data-driven training.
Contribution/Results: Extensive experiments demonstrate that WebWatcher significantly outperforms proprietary baselines, retrieval-augmented generation (RAG) workflows, and state-of-the-art open-source agents across four challenging visual question answering (VQA) benchmarks, validating its effectiveness and practicality in multimodal information-seeking tasks.
📝 Abstract
Web agents such as Deep Research have demonstrated superhuman cognitive abilities, solving highly challenging information-seeking problems. However, most research remains text-centric, overlooking visual information in the real world. This makes multimodal Deep Research highly challenging: such agents require much stronger reasoning abilities in perception, logic, and knowledge, as well as more sophisticated tool use, than text-based agents. To address this limitation, we introduce WebWatcher, a multimodal agent for Deep Research equipped with enhanced vision-language reasoning capabilities. It leverages high-quality synthetic multimodal trajectories for efficient cold-start training, uses various tools for deep reasoning, and further improves generalization through reinforcement learning. To better evaluate the capabilities of multimodal agents, we propose BrowseComp-VL, a BrowseComp-style benchmark that requires complex information retrieval involving both visual and textual information. Experimental results show that WebWatcher significantly outperforms proprietary baselines, RAG workflows, and open-source agents on four challenging VQA benchmarks, paving the way for solving complex multimodal information-seeking tasks.