AI Summary
This work addresses the lack of a reliable benchmark for evaluating geospatial reasoning over real-world maps in current vision-language models. To this end, the authors introduce MapVerse, a large-scale geospatial question-answering benchmark built on authentic map data, comprising 10 map categories, 1,025 map images, and 11,837 human-authored, multi-dimensional question-answer pairs. MapVerse is the first large-scale, human-annotated, and diverse dataset for geospatial reasoning, overcoming the reliance of prior benchmarks on synthetic data and narrow scenarios. Systematic evaluation shows that while existing models perform reasonably well on simple classification tasks, they exhibit significant deficiencies on higher-order tasks requiring complex spatial reasoning, exposing a critical bottleneck in their geospatial understanding capabilities.
Abstract
Maps are powerful carriers of structured and contextual knowledge, encompassing geography, demographics, infrastructure, and environmental patterns. Reasoning over such knowledge requires models to integrate spatial relationships, visual cues, real-world context, and domain-specific expertise: capabilities that current large language models (LLMs) and vision-language models (VLMs) still struggle to exhibit consistently. Yet the datasets used to benchmark VLMs on map-based reasoning remain narrow in scope, restricted to specific domains, and heavily reliant on artificially generated content (outputs from LLMs or pipeline-based methods), offering limited depth for evaluating genuine geospatial reasoning. To address this gap, we present MapVerse, a large-scale benchmark built on real-world maps. It comprises 11,837 human-authored question-answer pairs across 1,025 maps, spanning ten diverse map categories with multiple question categories within each. The dataset provides a rich setting for evaluating map reading, interpretation, and multimodal reasoning. We evaluate ten state-of-the-art models on our benchmark to establish baselines and quantify reasoning gaps. Beyond overall performance, we conduct fine-grained categorical analyses to assess model inference across multiple dimensions and investigate the visual factors shaping reasoning outcomes. Our findings reveal that while current VLMs perform competitively on classification-style tasks, both open- and closed-source models fall short on advanced tasks requiring complex spatial reasoning.
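To make the evaluation protocol concrete, below is a minimal Python sketch of per-category scoring over a benchmark of this shape. The record fields (image, question, answer, category), the JSONL file name, and the predict function are illustrative assumptions for this sketch, not the released MapVerse interface.

    import json
    from collections import defaultdict

    def evaluate(records, predict):
        # Compute overall and per-category exact-match accuracy.
        # `records`: dicts with keys "image", "question", "answer",
        # "category" (an assumed schema, not the official one);
        # `predict(image, question)`: any VLM wrapper returning a string.
        correct, total = defaultdict(int), defaultdict(int)
        for r in records:
            pred = predict(r["image"], r["question"])
            total[r["category"]] += 1
            if pred.strip().lower() == r["answer"].strip().lower():
                correct[r["category"]] += 1
        per_category = {c: correct[c] / total[c] for c in total}
        overall = sum(correct.values()) / max(sum(total.values()), 1)
        return overall, per_category

    # Hypothetical usage with a JSONL dump of the question-answer pairs:
    # records = [json.loads(line) for line in open("mapverse_qa.jsonl")]
    # overall_acc, category_acc = evaluate(records, my_vlm_predict)

Exact string match is a deliberate simplification here; scoring free-form answers in practice would require answer normalization or judge-based matching, which this sketch omits.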