VLM-Guided Visual Place Recognition for Planet-Scale Geo-Localization

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses planet-scale single-image geo-localization, an extreme form of the "kidnapped robot" problem, where drastic environmental, illumination, seasonal, and viewpoint variations limit robustness in applications such as navigation, autonomous driving, and disaster response. We propose a hybrid framework that integrates Vision-Language Models (VLMs) with Visual Place Recognition (VPR): a VLM generates a semantic geographic prior that constrains the candidate retrieval space, and a geography-aware re-ranking mechanism further improves matching accuracy and interpretability. Evaluated on multiple standard benchmarks, our method consistently outperforms state-of-the-art approaches, with accuracy gains of up to 4.51% at street level and 13.52% at city level. To the best of our knowledge, this is the first semantic-guided, interpretable, and globally scalable solution for single-image geo-localization.

📝 Abstract
Geo-localization from a single image at planet scale (essentially an advanced or extreme version of the kidnapped robot problem) is a fundamental and challenging task in applications such as navigation, autonomous driving and disaster response due to the vast diversity of locations, environmental conditions, and scene variations. Traditional retrieval-based methods for geo-localization struggle with scalability and perceptual aliasing, while classification-based approaches lack generalization and require extensive training data. Recent advances in vision-language models (VLMs) offer a promising alternative by leveraging contextual understanding and reasoning. However, while VLMs achieve high accuracy, they are often prone to hallucinations and lack interpretability, making them unreliable as standalone solutions. In this work, we propose a novel hybrid geo-localization framework that combines the strengths of VLMs with retrieval-based visual place recognition (VPR) methods. Our approach first leverages a VLM to generate a prior, effectively guiding and constraining the retrieval search space. We then employ a retrieval step, followed by a re-ranking mechanism that selects the most geographically plausible matches based on feature similarity and proximity to the initially estimated coordinates. We evaluate our approach on multiple geo-localization benchmarks and show that it consistently outperforms prior state-of-the-art methods, particularly at street (up to 4.51%) and city level (up to 13.52%). Our results demonstrate that VLM-generated geographic priors in combination with VPR lead to scalable, robust, and accurate geo-localization systems.
Problem

Research questions and friction points this paper is trying to address.

Planet-scale geo-localization from single images
Overcoming scalability and perceptual aliasing in traditional methods
Reducing VLM hallucinations for reliable geo-localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework combining VLMs and VPR
VLM generates prior to guide retrieval
Re-ranking based on similarity and proximity
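The retrieval-and-re-ranking step above can be sketched roughly as follows. The paper does not publish code, so the haversine pruning radius, the `alpha` similarity/proximity weighting, and all function names here are illustrative assumptions, not the authors' implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rerank(query_feat, vlm_prior, candidates, radius_km=500.0, alpha=0.5):
    """Re-rank retrieved candidates by feature similarity and proximity
    to the VLM-estimated coordinates (weighting scheme is an assumption).

    candidates: list of (feature_vector, (lat, lon), place_id) tuples.
    Returns (score, place_id) pairs, best first.
    """
    scored = []
    for feat, (lat, lon), place_id in candidates:
        d = haversine_km(vlm_prior[0], vlm_prior[1], lat, lon)
        if d > radius_km:
            continue  # prune candidates outside the VLM prior's region
        proximity = 1.0 - d / radius_km  # 1 at the prior, 0 at the radius
        score = alpha * cosine(query_feat, feat) + (1 - alpha) * proximity
        scored.append((score, place_id))
    return sorted(scored, reverse=True)

# Toy example: the VLM prior points near Paris, so the New York
# candidate is pruned and the Paris candidate ranks first.
candidates = [
    ([0.9, 0.1], (48.86, 2.35), "paris"),
    ([0.8, 0.2], (40.71, -74.01), "nyc"),
]
ranked = rerank([1.0, 0.0], (48.85, 2.35), candidates)
```

The key idea this sketch illustrates is that the VLM prior first discards geographically implausible candidates, after which the final score blends visual similarity with distance to the estimated coordinates; the paper's actual scoring function may differ.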