GeoAnalystBench: A GeoAI benchmark for assessing large language models for spatial analysis workflow and code generation

📅 2025-09-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study evaluates the practical capabilities and limitations of large language models (LLMs) in automating geospatial analysis and GIS workflows. Method: We introduce a Python-based benchmark dataset for spatial analysis—comprising 50 real-world tasks—and propose a multidimensional evaluation framework validated by GIS domain experts, assessing workflow validity, structural alignment, semantic similarity, and code quality. Our methodology integrates expert annotation with automated metrics (e.g., CodeBLEU) and executable workflow validation. Contribution/Results: Proprietary models (e.g., ChatGPT-4o-mini) achieve top performance (95% workflow validity, CodeBLEU = 0.39), while small open-source models consistently underperform; complex spatial reasoning remains a critical bottleneck. This work establishes a reproducible, expert-validated LLM evaluation paradigm for geospatial AI, bridging a significant gap in the literature and providing empirical foundations for intelligent GIS development.

📝 Abstract
Recent advances in large language models (LLMs) have fueled growing interest in automating geospatial analysis and GIS workflows, yet their actual capabilities remain uncertain. In this work, we call for rigorous evaluation of LLMs on well-defined geoprocessing tasks before making claims about full GIS automation. To this end, we present GeoAnalystBench, a benchmark of 50 Python-based tasks derived from real-world geospatial problems and carefully validated by GIS experts. Each task is paired with a minimum deliverable product, and evaluation covers workflow validity, structural alignment, semantic similarity, and code quality (CodeBLEU). Using this benchmark, we assess both proprietary and open-source models. Results reveal a clear gap: proprietary models such as ChatGPT-4o-mini achieve high workflow validity (95%) and stronger code alignment (CodeBLEU 0.39), while smaller open-source models like DeepSeek-R1-7B often generate incomplete or inconsistent workflows (48.5% validity, 0.272 CodeBLEU). Tasks requiring deeper spatial reasoning, such as spatial relationship detection or optimal site selection, remain the most challenging across all models. These findings demonstrate both the promise and limitations of current LLMs in GIS automation and provide a reproducible framework to advance GeoAI research with human-in-the-loop support.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' capabilities for geospatial analysis workflow automation
Evaluating code generation quality for real-world geoprocessing Python tasks
Identifying performance gaps in spatial reasoning tasks across models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark for LLM geospatial task evaluation
Python tasks from real-world GIS problems
Multi-metric assessment including CodeBLEU quality
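The multi-metric assessment above pairs an executability check (workflow validity) with code-similarity scoring (CodeBLEU). The sketch below illustrates that two-part idea in miniature; it is not the paper's implementation. The token-overlap ratio is a deliberately crude stand-in for CodeBLEU (which additionally weighs AST and data-flow matches), and the geopandas snippets are hypothetical examples of a reference solution versus a model's candidate.

```python
import difflib
import subprocess
import sys
import tempfile

def surface_similarity(candidate: str, reference: str) -> float:
    """Token-overlap ratio between two code strings.

    A simplified stand-in for CodeBLEU, which also scores
    syntax-tree and data-flow agreement.
    """
    return difflib.SequenceMatcher(
        None, candidate.split(), reference.split()
    ).ratio()

def runs_without_error(code: str) -> bool:
    """Minimal workflow-validity check: does the script exit cleanly?"""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True)
    return result.returncode == 0

# Hypothetical reference solution and model output for a buffer task.
reference = (
    "import geopandas as gpd\n"
    "gdf = gpd.read_file('parcels.shp')\n"
    "buffered = gdf.buffer(100)"
)
candidate = (
    "import geopandas as gpd\n"
    "gdf = gpd.read_file('parcels.shp')\n"
    "buffered = gdf.geometry.buffer(100)"
)

print(round(surface_similarity(candidate, reference), 2))  # → 0.9
print(runs_without_error("print('ok')"))                   # → True
```

In the benchmark itself, validity is judged against each task's minimum deliverable product rather than a bare exit code, but the two-axis structure (does it run, and how close is the code) is the same.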
Qianheng Zhang
Geospatial Data Science Lab, Department of Geography, University of Wisconsin-Madison
Song Gao
Geospatial Data Science Lab, Department of Geography, University of Wisconsin-Madison
Chen Wei
Geospatial Data Science Lab, Department of Geography, University of Wisconsin-Madison
Yibo Zhao
Geospatial Data Science Lab, Department of Geography, University of Wisconsin-Madison
Ying Nie
Geospatial Data Science Lab, Department of Geography, University of Wisconsin-Madison
Ziru Chen
The Ohio State University
Conversational AI, Natural Language Processing, Machine Learning
Shijie Chen
PhD Student, The Ohio State University
Natural Language Processing, Machine Learning
Yu Su
Department of Computer Science and Engineering, The Ohio State University
Huan Sun
Endowed CoE Innovation Scholar and Associate Professor, The Ohio State University
Agents, Large Language Models, Natural Language Processing, AI