🤖 AI Summary
Foundation models face significant challenges in interpreting gridded geospatial data (e.g., climate fields) due to dense numerical values, strong spatiotemporal dependencies, and heterogeneous multimodal representations (tables, heatmaps, geographic visualizations).
Method: We introduce GeoGrid-Bench, the first dedicated multimodal benchmark for geospatial grid data. It covers 16 climate variables across 150 locations over extended time frames and comprises approximately 3,200 expert-crafted question-answer pairs. We systematically define and evaluate spatial-temporal dependency understanding and multimodal fusion capabilities, proposing eight scientific task templates, designed by domain experts, that span pointwise queries to cross-regional, cross-temporal reasoning.
Contribution/Results: Using a fine-grained task-decomposition evaluation framework, we find that vision-language models achieve the best overall performance, and we precisely characterize their capability boundaries in spatial localization, temporal trend identification, and cross-domain comparison, establishing a reproducible, extensible evaluation standard for geoscience AI.
📝 Abstract
We present GeoGrid-Bench, a benchmark designed to evaluate the ability of foundation models to understand grid-structured geo-spatial data. Geo-spatial datasets pose distinct challenges due to their dense numerical values, strong spatial and temporal dependencies, and unique multimodal representations including tabular data, heatmaps, and geographic visualizations. To assess how foundation models can support scientific research in this domain, GeoGrid-Bench features large-scale, real-world data covering 16 climate variables across 150 locations and extended time frames. The benchmark includes approximately 3,200 question-answer pairs, systematically generated from 8 domain-expert-curated templates to reflect practical tasks encountered by human scientists. These range from basic queries at a single location and time to complex spatiotemporal comparisons across regions and periods. Our evaluation reveals that vision-language models perform best overall, and we provide a fine-grained analysis of the strengths and limitations of different foundation models on different geo-spatial tasks. This benchmark offers clearer insights into how foundation models can be effectively applied to geo-spatial data analysis and used to support scientific research.
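To make the task spectrum concrete, the following is a minimal sketch, not the GeoGrid-Bench API, of the two extremes the abstract describes: a basic query at a single location and time, and a cross-region, cross-period comparison over a gridded climate variable. The array shape, variable, and helper names are illustrative assumptions.

```python
import numpy as np

# Hypothetical example only (not the benchmark's actual data or code):
# a climate variable stored as a (time, lat, lon) grid.
rng = np.random.default_rng(0)
temps = rng.normal(15.0, 5.0, size=(120, 40, 60))  # 120 months, 40x60 grid


def pointwise_query(grid, t, i, j):
    """Basic task: read the variable at one location and one time step."""
    return float(grid[t, i, j])


def region_trend_comparison(grid, region_a, region_b, early, late):
    """Complex task: which region's mean value increased more between
    an early and a late time period? Regions are (i_slice, j_slice)."""
    def region_mean(region, period):
        i, j = region
        return float(grid[period, i, j].mean())

    delta_a = region_mean(region_a, late) - region_mean(region_a, early)
    delta_b = region_mean(region_b, late) - region_mean(region_b, early)
    return "A" if delta_a > delta_b else "B"


value = pointwise_query(temps, t=0, i=10, j=20)
answer = region_trend_comparison(
    temps,
    region_a=(slice(0, 20), slice(0, 30)),
    region_b=(slice(20, 40), slice(30, 60)),
    early=slice(0, 60),
    late=slice(60, 120),
)
```

A benchmark question-answer pair would pose such a query in natural language (optionally alongside a heatmap or table rendering of the grid) and check the model's answer against the value computed directly from the data.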