🤖 AI Summary
This study systematically evaluates the capability of large language models (LLMs) to understand and reason about vector geographic entities and their topological spatial relationships, addressing the semantic gap between Well-Known Text (WKT) geometry representations and natural-language spatial descriptions.
Method: We propose a multi-path geospatial question-answering framework integrating geometric embedding encoding, few-shot prompt engineering, and natural-language parsing.
Contribution/Results: To our knowledge, this is the first work to quantitatively assess LLMs on inverse topological relation identification, mapping vernacular place descriptions to formal spatial relations, and verifiable geometric object generation. Experiments show that GPT-4 with few-shot prompting achieves over 0.66 accuracy on topological relation reasoning; both the embedding-based and prompt-engineering-based approaches attain average accuracies exceeding 0.6; and LLM-generated geometries improve geographic entity retrieval performance, demonstrating the feasibility and potential of LLMs for context-enhanced spatial reasoning.
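Evaluating an LLM's answers about topological relations requires a ground truth computed directly from the WKT geometries. The following is a minimal, self-contained sketch of that idea; the function names, the restriction to axis-aligned rectangular polygons, and the coarse four-label vocabulary are illustrative assumptions, not the paper's implementation, which would typically rely on a geometry library (e.g., Shapely) and the full set of DE-9IM predicates.

```python
# Toy ground-truth oracle for topological relations between two WKT
# rectangles. Hypothetical helper names; a real pipeline would use a
# geometry engine and full DE-9IM predicates instead.

def parse_rect_wkt(wkt: str):
    """Parse a rectangular WKT polygon, e.g.
    'POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0))',
    into a (xmin, ymin, xmax, ymax) bounding box."""
    coords = wkt[wkt.index("((") + 2 : wkt.index("))")]
    pts = [tuple(map(float, p.split())) for p in coords.split(",")]
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

def topological_relation(a: str, b: str) -> str:
    """Classify the relation of rectangle a to rectangle b as one of
    'disjoint', 'contains', 'within', or 'intersects'
    (a coarse subset of the DE-9IM relation vocabulary)."""
    ax1, ay1, ax2, ay2 = parse_rect_wkt(a)
    bx1, by1, bx2, by2 = parse_rect_wkt(b)
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "disjoint"
    if ax1 <= bx1 and ay1 <= by1 and ax2 >= bx2 and ay2 >= by2:
        return "contains"
    if bx1 <= ax1 and by1 <= ay1 and bx2 >= ax2 and by2 >= ay2:
        return "within"
    return "intersects"

a = "POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0))"
b = "POLYGON ((1 1, 2 1, 2 2, 1 2, 1 1))"
print(topological_relation(a, b))  # contains
print(topological_relation(b, a))  # within
```

Note how swapping the argument order yields the inverse relation (`contains` vs. `within`); checking whether an LLM respects exactly this inversion is one of the evaluation tasks the summary highlights.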
📝 Abstract
Applying AI foundation models directly to geospatial datasets remains challenging due to their limited ability to represent and reason with geographical entities, specifically vector-based geometries and natural-language descriptions of complex spatial relations. To address these issues, we investigate the extent to which well-known-text (WKT) representations of geometries and their spatial relations (e.g., topological predicates) are preserved during spatial reasoning when geospatial vector data are passed to large language models (LLMs), including GPT-3.5-turbo, GPT-4, and DeepSeek-R1-14B. Our workflow employs three distinct approaches to the spatial reasoning tasks for comparison: geometry embedding-based, prompt engineering-based, and everyday language-based evaluation. Our experimental results demonstrate that both the embedding-based and prompt engineering-based approaches to geospatial question-answering tasks with GPT models can achieve an average accuracy of over 0.6 on the identification of topological spatial relations between two geometries. Among the evaluated models, GPT-4 with few-shot prompting achieved the highest performance, with over 0.66 accuracy on topological spatial relation inference. Additionally, the GPT-based reasoner properly comprehends inverse topological spatial relations, and including an LLM-generated geometry can enhance the effectiveness of geographic entity retrieval. GPT-4 also exhibits the ability to translate certain vernacular descriptions about places into formal topological relations, and adding geometry-type or place-type context in prompts may improve inference accuracy, although the effect varies by instance. The performance on these spatial reasoning tasks offers valuable insights for refining LLMs with geographical knowledge towards the development of geo-foundation models capable of geospatial reasoning.
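The prompt engineering-based approach pairs a query over two WKT geometries with a handful of worked exemplars. Below is a hypothetical sketch of how such a few-shot prompt might be assembled; the exemplar geometries, instruction wording, and relation vocabulary are assumptions for illustration, and the paper's actual prompts may differ.

```python
# Hypothetical few-shot prompt builder for topological relation inference
# over WKT geometry pairs. Exemplars and wording are illustrative only.

FEW_SHOT_EXAMPLES = [
    ("POINT (2 2)", "POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0))", "within"),
    ("LINESTRING (0 0, 5 5)", "POLYGON ((6 6, 8 6, 8 8, 6 8, 6 6))", "disjoint"),
]

def build_prompt(geom_a: str, geom_b: str) -> str:
    """Format few-shot exemplars plus the query pair into one prompt string."""
    lines = ["Identify the topological relation (e.g., within, contains, "
             "disjoint, intersects) between geometry A and geometry B."]
    for a, b, rel in FEW_SHOT_EXAMPLES:
        lines += [f"A: {a}", f"B: {b}", f"Relation: {rel}", ""]
    # The query pair ends with an open slot the model is asked to complete.
    lines += [f"A: {geom_a}", f"B: {geom_b}", "Relation:"]
    return "\n".join(lines)

prompt = build_prompt("POINT (1 1)",
                      "POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0))")
print(prompt)
```

The adding-context variant the abstract mentions would simply extend each exemplar and the query with geometry-type or place-type annotations (e.g., "A is a river; B is a county boundary") before the `Relation:` slot.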