🤖 AI Summary
Problem: Geospatial code generation on Google Earth Engine (GEE) lacks a standardized, multidimensional, automated evaluation framework for large language models (LLMs).
Method: We propose the first multi-level, multimodal evaluation framework tailored for GEE, covering three task categories (unit, combo, and theme tests), 26 geospatial data types, and 6,365 test cases. Our framework introduces a geospatial-specific assessment protocol integrating accuracy, resource consumption, execution efficiency, and error-type analysis, augmented with hallucination suppression and boundary testing. It leverages GEE's Python API for execution-driven validation, dynamic error classification, and performance profiling.
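The execution-driven validation and error classification described above could look roughly like the sketch below. This is a minimal illustration, not the paper's implementation: the error taxonomy, function names, and the use of a plain `exec`-based harness are all assumptions for demonstration purposes.

```python
import traceback

# Hypothetical error taxonomy; these category names are illustrative
# and do not come from the paper.
ERROR_CATEGORIES = {
    SyntaxError: "syntax",
    NameError: "api_hallucination",  # e.g. calling a nonexistent API
    TypeError: "parameter_mismatch",
    TimeoutError: "timeout",
}

def run_test_case(generated_code, namespace=None):
    """Execute a generated snippet and classify the outcome."""
    env = dict(namespace or {})
    try:
        exec(generated_code, env)
    except Exception as exc:
        category = ERROR_CATEGORIES.get(type(exc), "other_runtime")
        return {"passed": False, "error_type": category,
                "detail": traceback.format_exc(limit=1)}
    # By convention here, the snippet stores its answer in `result`.
    return {"passed": True, "error_type": None, "result": env.get("result")}

# A passing case, and a case that calls an undefined (hallucinated) API:
ok = run_test_case("result = sum(range(5))")
bad = run_test_case("result = ee.Image.nonexistent_method()")
```

In a real GEE setting the namespace would contain an initialized `ee` module and validation would compare the executed output against a reference answer; the sketch only shows the execute-then-classify control flow.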
Contribution/Results: We systematically evaluate 24 state-of-the-art LLMs (as of June 2025), revealing performance disparities across task complexity, model architecture, and deployment environments. The framework establishes a reproducible, domain-specific benchmark for geospatial code generation on GEE.
📝 Abstract
Geospatial code generation is becoming a key frontier in integrating artificial intelligence with geo-scientific analysis, yet standardised automated evaluation tools for this task remain absent. This study presents AutoGEEval++, an enhanced framework building on AutoGEEval, and the first automated assessment system for large language models (LLMs) generating geospatial code on Google Earth Engine (GEE). It supports diverse data modalities and varying task complexities. Built on the GEE Python API, AutoGEEval++ features a benchmark dataset, AutoGEEval++-Bench, with 6,365 test cases across 26 data types and three task categories: unit, combo, and theme tests. It includes a submission program and a judge module that together realise an end-to-end automated evaluation pipeline from code generation to execution-based validation. The framework adopts multi-dimensional metrics (accuracy, resource usage, run-time efficiency, and error types), balancing hallucination control and efficiency, and enabling boundary testing and error pattern analysis. Using AutoGEEval++, we evaluate 24 state-of-the-art LLMs (as of June 2025), including general-purpose, reasoning-enhanced, code-centric, and geoscience-specific models. Results reveal clear performance, stability, and error differences across task types, model designs, and deployment settings, confirming AutoGEEval++'s practical value and scalability in vertical-domain code generation. This work establishes the first standardised evaluation protocol and foundational benchmark for GEE-based LLM code generation, providing a unified basis for performance comparison and a methodological framework for systematic, domain-specific code evaluation.
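The run-time efficiency and resource-usage metrics mentioned in the abstract could be gathered with a small profiling wrapper like the following. This is a simplified stand-in, assuming wall-clock time and Python heap allocation as the measured resources; the paper's actual metric definitions may differ.

```python
import time
import tracemalloc

def profile_snippet(code):
    """Run a snippet once, recording wall-clock time and peak Python
    memory allocation (a rough proxy for resource consumption)."""
    env = {}
    tracemalloc.start()
    start = time.perf_counter()
    exec(code, env)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"seconds": elapsed, "peak_bytes": peak}

stats = profile_snippet("data = [i * i for i in range(10_000)]")
```

Per-snippet numbers like these can then be aggregated across the benchmark's test cases to compare models on efficiency as well as correctness.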