🤖 AI Summary
This study evaluates the practical capabilities and limitations of large language models (LLMs) in automating geospatial analysis and GIS workflows. Method: We introduce the first Python-based benchmark dataset for spatial analysis, comprising 50 real-world tasks, and propose a multidimensional evaluation framework validated by GIS domain experts, assessing workflow validity, structural alignment, semantic similarity, and code quality. Our methodology integrates expert annotation with automated metrics (e.g., CodeBLEU) and executable workflow validation. Contribution/Results: Closed-source models (e.g., GPT-4o-mini) achieve top performance (95% workflow validity, CodeBLEU = 0.39), while small open-weight models consistently underperform; complex spatial reasoning remains a critical bottleneck. This work establishes the first reproducible, expert-validated LLM evaluation paradigm for geospatial AI, bridging a significant gap in the literature and providing empirical foundations for intelligent GIS development.
📝 Abstract
Recent advances in large language models (LLMs) have fueled growing interest in automating geospatial analysis and GIS workflows, yet their actual capabilities remain uncertain. In this work, we call for rigorous evaluation of LLMs on well-defined geoprocessing tasks before making claims about full GIS automation. To this end, we present GeoAnalystBench, a benchmark of 50 Python-based tasks derived from real-world geospatial problems and carefully validated by GIS experts. Each task is paired with a minimum deliverable product, and evaluation covers workflow validity, structural alignment, semantic similarity, and code quality (CodeBLEU). Using this benchmark, we assess both proprietary and open-source models. Results reveal a clear gap: proprietary models such as GPT-4o-mini achieve high workflow validity (95%) and stronger code alignment (CodeBLEU 0.39), while smaller open-source models such as DeepSeek-R1-7B often generate incomplete or inconsistent workflows (48.5% validity, CodeBLEU 0.272). Tasks requiring deeper spatial reasoning, such as spatial relationship detection or optimal site selection, remain the most challenging across all models. These findings demonstrate both the promise and limitations of current LLMs in GIS automation and provide a reproducible framework to advance GeoAI research with human-in-the-loop support.
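To make the workflow-validity dimension concrete, here is a minimal sketch of what an automated validity check could look like. This is an illustration, not the benchmark's actual harness: the function name, the required-call criterion, and the example snippet are all assumptions; the paper's validation additionally considers structural alignment, semantic similarity, and execution of the generated workflow.

```python
import ast


def workflow_validity(generated_code: str, required_calls: set) -> bool:
    """Rough proxy for workflow validity: the generated script must
    parse as Python and invoke each required geoprocessing step at
    least once. (A full check would also execute the workflow and
    compare its output against the minimum deliverable product.)"""
    try:
        tree = ast.parse(generated_code)
    except SyntaxError:
        return False
    called = {
        # Collect names of all invoked functions/methods in the script
        node.func.attr if isinstance(node.func, ast.Attribute)
        else getattr(node.func, "id", "")
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
    }
    return required_calls <= called


# Hypothetical LLM output for a buffer-analysis task
snippet = (
    "import geopandas as gpd\n"
    "gdf = gpd.read_file('roads.shp')\n"
    "buffered = gdf.buffer(100)\n"
)
print(workflow_validity(snippet, {"read_file", "buffer"}))  # True
print(workflow_validity("def broken(:", {"read_file"}))     # False
```

Because the check only parses the code (it never runs it), it is safe to apply to untrusted model output; an execution-based validity check would need sandboxing.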