AutoGEEval++: A Multi-Level and Multi-Geospatial-Modality Automated Evaluation Framework for Large Language Models in Geospatial Code Generation on Google Earth Engine

📅 2025-06-12
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Problem: Geospatial code generation on Google Earth Engine (GEE) by large language models (LLMs) lacks standardised, multidimensional automated evaluation. Method: We propose the first multi-level, multimodal evaluation framework tailored for GEE, covering three task categories (unit, compositional, thematic), 26 geospatial data types, and 6,365 test cases. The framework introduces a geospatial-specific assessment protocol that integrates accuracy, resource consumption, execution efficiency, and error-type analysis, augmented with hallucination suppression and boundary testing, and it leverages GEE's Python API for execution-driven validation, dynamic error classification, and performance profiling. Contribution/Results: We systematically evaluate 24 state-of-the-art LLMs (as of June 2025), revealing performance disparities across task complexity, model architecture, and deployment environment, and establish a reproducible, domain-specific benchmark for geospatial code generation on GEE.

📝 Abstract
Geospatial code generation is becoming a key frontier in integrating artificial intelligence with geo-scientific analysis, yet standardised automated evaluation tools for this task remain absent. This study presents AutoGEEval++, an enhanced framework building on AutoGEEval and the first automated assessment system for large language models (LLMs) generating geospatial code on Google Earth Engine (GEE). It supports diverse data modalities and varying task complexities. Built on the GEE Python API, AutoGEEval++ features a benchmark dataset, AutoGEEval++-Bench, with 6,365 test cases across 26 data types and three task categories: unit, combo, and theme tests. It includes a submission programme and a judge module that together realise an end-to-end automated evaluation pipeline, from code generation to execution-based validation. The framework adopts multi-dimensional metrics (accuracy, resource usage, run-time efficiency, and error types), balancing hallucination control against efficiency and enabling boundary testing and error-pattern analysis. Using AutoGEEval++, we evaluate 24 state-of-the-art LLMs (as of June 2025), including general-purpose, reasoning-enhanced, code-centric, and geoscience-specific models. The results reveal clear differences in performance, stability, and error profiles across task types, model designs, and deployment settings, confirming AutoGEEval++'s practical value and scalability for vertical-domain code generation. This work establishes the first standardised evaluation protocol and foundational benchmark for GEE-based LLM code generation, providing a unified basis for performance comparison and a methodological framework for systematic, domain-specific code evaluation.
Problem

Research questions and friction points this paper is trying to address.

No standardised tools for evaluating LLM-generated geospatial code
Need for multi-modal support across diverse geospatial data and tasks
Need for automated assessment of code accuracy, efficiency, and errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional metrics for code evaluation
End-to-end automated evaluation pipeline
Benchmark dataset with diverse test cases
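The end-to-end pipeline described above, in which generated code is executed and judged on correctness, runtime, and error type, can be sketched in pure Python. This is a minimal illustration of the execution-driven validation idea, not the authors' actual implementation: the `judge` function, its record fields, and the toy test case are all hypothetical, and real GEE calls (which require authentication) are replaced by plain Python.

```python
import time


def judge(generated_code: str, reference_output):
    """Hypothetical judge-module sketch: execute model-generated code,
    compare the value it binds to `result` against a reference answer,
    and record runtime and error type. Names are illustrative only."""
    scope = {}
    record = {"passed": False, "error_type": None, "runtime_s": None}
    start = time.perf_counter()
    try:
        exec(generated_code, scope)  # execution-based validation
        record["runtime_s"] = time.perf_counter() - start
        record["passed"] = scope.get("result") == reference_output
        if not record["passed"]:
            record["error_type"] = "WrongAnswer"
    except SyntaxError:
        # Catch compile-time failures before the generic handler,
        # since SyntaxError is itself a subclass of Exception.
        record["error_type"] = "SyntaxError"
    except Exception as exc:
        # Runtime failures are classified by exception class name.
        record["error_type"] = type(exc).__name__
    return record


# Toy "unit test" case: the model's code should compute an
# NDVI-style normalised difference, (3 - 1) / (3 + 1) = 0.5.
case = "nir, red = 3.0, 1.0\nresult = (nir - red) / (nir + red)"
print(judge(case, 0.5))
```

A full pipeline in this spirit would run each of the benchmark's test cases through such a judge and aggregate pass rates, runtimes, and error-type histograms per model.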
Shuyang Hou
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Zhangxiao Shen
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Huayi Wu
Wuhan University
GIS, remote sensing, cartography, Geomatics
Haoyue Jiao
Wuhan University
GeoAI, Large Language Model, Code Generation
Ziqi Liu
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Lutong Xie
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Chang Liu
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Jianyuan Liang
Wuhan University
GIS System, GIService, Spatial Data Mining, Graph RAG
Yaxian Qing
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Xiaopu Zhang
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
D. Peng
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
Zhipeng Gui
Professor of GIScience, Wuhan University
GeoAI, Spatiotemporal Data Analysis, Web Service & QoS, High Performance Computing
Xuefeng Guan
Professor, Wuhan University
High-performance GeoComputation, Big-data Analytics, Spatial Data Mining