🤖 AI Summary
This study addresses the challenge of automatically converting narrative relative-location descriptions in biological specimen records (e.g., “5 km northeast of a river mouth”) into geographic coordinates. It proposes an end-to-end georeferencing method built on large language models (LLMs), targeting the relative-direction reasoning that gazetteer- and keyword-based geocoders handle poorly. Because prompt engineering alone proved insufficient, the authors apply QLoRA-based parameter-efficient fine-tuning, enabling multilingual and cross-regional adaptation on heterogeneous biogeographic data. They also assemble a multi-source, multi-language dataset and systematically optimize prompting strategies. Experiments show that, averaged across datasets, 65% of records are localized within 10 km; on New York State data, 85% of records fall within 10 km and 67% within 1 km, substantially outperforming existing automated approaches.
📝 Abstract
Georeferencing text documents has typically relied either on gazetteer-based methods that assign geographic coordinates to place names, or on language-modelling approaches that associate textual terms with geographic locations. However, many location descriptions specify positions relatively, through spatial relationships, making geocoding based solely on place names or geo-indicative words inaccurate. This issue frequently arises in biological specimen collection records, where locations are often described through narratives rather than coordinates, particularly in records that pre-date GPS. Accurate georeferencing is vital for biodiversity studies, yet the process remains labour-intensive, creating demand for automated georeferencing solutions. This paper explores the potential of Large Language Models (LLMs) to georeference complex locality descriptions automatically, focusing on the biodiversity collections domain. We first identified effective prompting patterns, then fine-tuned an LLM using Quantized Low-Rank Adaptation (QLoRA) on biodiversity datasets from multiple regions and languages. For a fixed amount of training data, our approach outperforms existing baselines, with an average of 65% of records georeferenced within a 10 km radius across datasets. The best results (New York State) were 85% within 10 km and 67% within 1 km. The selected LLM performs well on lengthy, complex descriptions, highlighting its potential for georeferencing intricate locality descriptions.
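The headline metrics above (share of records within 1 km or 10 km of the true coordinates) amount to thresholding a great-circle distance between predicted and reference points. A minimal sketch of that evaluation, using the haversine formula; the function names are our own, not from the paper:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def fraction_within(pred, gold, radius_km):
    """Share of predicted coordinates falling within radius_km of the gold ones."""
    hits = sum(
        haversine_km(p_lat, p_lon, g_lat, g_lon) <= radius_km
        for (p_lat, p_lon), (g_lat, g_lon) in zip(pred, gold)
    )
    return hits / len(gold)
```

For example, a prediction identical to its gold coordinate counts as a hit at any radius, while one hundreds of kilometres away does not, so `fraction_within` over a test set yields numbers directly comparable to the 65%/85%/67% figures reported.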