Large Multi-modal Model Cartographic Map Comprehension for Textual Locality Georeferencing

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Historical biological specimen records often contain complex, unstructured locality descriptions lacking explicit geographic references, making manual georeferencing labor-intensive and costly; existing automated approaches neglect maps—the most critical tool for spatial reasoning. This paper introduces the first zero-shot map-aware georeferencing framework, integrating large multimodal models (LMMs) with map visual inputs and proposing a novel gridded spatial reasoning mechanism to precisely model and localize textual spatial relationships. The method requires no domain-specific annotated data, instead leveraging map context to enhance interpretation of ambiguous, nested, and relative location descriptions. Evaluated on a manually annotated dataset, it achieves an average localization error of approximately 1 km—substantially outperforming unimodal vision-language models and state-of-the-art georeferencing tools. Our approach delivers a scalable, high-accuracy automation solution for digitizing large-scale natural history collections.
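The gridded reasoning described above has the model name a cell on a labelled map overlay, which is then mapped back to coordinates. A minimal sketch of that decoding step, assuming an illustrative "letter-row, number-column" cell scheme and a known map bounding box (the paper does not specify these details):

```python
# Hypothetical sketch of decoding a grid-cell answer (e.g. "C4") from an LMM
# back into coordinates. The labelling scheme, grid size, and bounding box
# are illustrative assumptions, not the paper's exact design.

def cell_to_latlon(cell, bbox, rows=8, cols=8):
    """Return the (lat, lon) centre of a grid cell such as 'C4'.

    bbox = (min_lat, min_lon, max_lat, max_lon); rows are lettered
    top-to-bottom ('A', 'B', ...), columns numbered left-to-right (1, 2, ...).
    """
    row = ord(cell[0].upper()) - ord("A")   # 'C' -> row index 2
    col = int(cell[1:]) - 1                 # '4' -> column index 3
    min_lat, min_lon, max_lat, max_lon = bbox
    cell_h = (max_lat - min_lat) / rows
    cell_w = (max_lon - min_lon) / cols
    lat = max_lat - (row + 0.5) * cell_h    # row 0 sits at the top edge
    lon = min_lon + (col + 0.5) * cell_w
    return lat, lon
```

Coarse-to-fine variants of this idea simply re-grid the chosen cell and ask the model again, narrowing the search area at each step.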

📝 Abstract
Millions of biological sample records collected in the last few centuries and archived in natural history collections are un-georeferenced. Georeferencing the complex locality descriptions associated with these collection samples is a highly labour-intensive task that collection agencies struggle with. None of the existing automated methods exploit maps, which are an essential tool for georeferencing complex relations. We present preliminary experiments and results of a novel method that exploits the multi-modal capabilities of recent Large Multi-Modal Models (LMM). This method enables the model to visually contextualize spatial relations it reads in the locality description. We use a grid-based approach to adapt these auto-regressive models for this task in a zero-shot setting. Our experiments conducted on a small manually annotated dataset show impressive results for our approach ($\sim$1 km average distance error) compared to uni-modal georeferencing with Large Language Models and existing georeferencing tools. The paper also discusses the findings of the experiments in light of an LMM's ability to comprehend fine-grained maps. Motivated by these results, a practical framework is proposed to integrate this method into a georeferencing workflow.
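The abstract's headline metric, average distance error, is conventionally computed as the mean great-circle distance between predicted and ground-truth coordinates. A minimal sketch using the haversine formula (the paper does not state which distance formula it uses, so haversine is an assumption here):

```python
# Sketch of the average-distance-error metric, assuming haversine
# great-circle distance on a spherical Earth (R = 6371 km).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

def average_distance_error(preds, truths):
    """Mean haversine distance over paired predictions and ground truths."""
    errs = [haversine_km(p[0], p[1], t[0], t[1]) for p, t in zip(preds, truths)]
    return sum(errs) / len(errs)
```

A score of roughly 1 km on this metric means predictions land, on average, within about one kilometre of the annotated point.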
Problem

Research questions and friction points this paper is trying to address.

Georeferencing biological samples with complex locality descriptions
Automating map-based spatial relation comprehension using LMMs
Improving accuracy over uni-modal georeferencing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Large Multi-Modal Models for map comprehension
Grid-based approach for zero-shot georeferencing
Integrates visual and textual data for spatial relations