🤖 AI Summary
Existing agent benchmarks struggle to evaluate agents' ability to integrate weak visual cues with multi-hop knowledge verification in complex real-world scenarios. This work proposes GeoBrowse, a geolocation benchmark that uniquely frames geolocation as a testbed for multimodal agent tool use. It introduces a hierarchical task design to disentangle visual understanding from knowledge-intensive reasoning, and provides GATE, an agent framework equipped with five image-based tools and four knowledge-based tools, along with expert-annotated step-by-step reasoning trajectories. Experiments demonstrate that GATE significantly outperforms ablated variants (no-tool, search-only, and image-only approaches) in both critical-evidence coverage and final accuracy, validating the efficacy of structured, hierarchically aligned tool-use strategies.
📝 Abstract
Deep research agents integrate fragmented evidence through multi-step tool use. BrowseComp offers a text-only testbed for such agents, but existing multimodal benchmarks rarely require both the composition of weak visual cues and BrowseComp-style multi-hop verification. Geolocation is a natural testbed because answers depend on combining multiple ambiguous visual cues and validating them with open-web evidence. We therefore introduce GeoBrowse, a geolocation benchmark that combines visual reasoning with knowledge-intensive multi-hop queries. Level 1 tests extracting and composing fragmented visual cues, and Level 2 increases query difficulty by injecting long-tail knowledge and obfuscating key entities. To support evaluation, we provide an agentic workflow, GATE, with five think-with-image tools and four knowledge-intensive tools, and release expert-annotated stepwise traces grounded in verifiable evidence for trajectory-level analysis. Experiments show that GATE outperforms direct inference and open-source agents, indicating that no-tool, search-only, or image-only setups are insufficient. Gains come from coherent, level-specific tool-use plans rather than from more tool calls, as such plans more reliably reach annotated key-evidence steps and make fewer errors when integrating evidence into the final decision. The GeoBrowse benchmark and code are available at https://github.com/ornamentt/GeoBrowse