🤖 AI Summary
This study systematically evaluates the capability and consistency of black-box vision-language models (VLMs) such as GPT-4V and LLaVA at image geolocation *in zero-shot, no-fine-tuning, feature-inaccessible settings*. Methodologically, the authors design three realistic evaluation scenarios (fixed textual prompts, semantically equivalent prompt variants, and image-based queries) and introduce *output consistency*, measured across prompt perturbations, as a core evaluation metric. Experiments reveal that state-of-the-art black-box VLMs achieve low geolocation accuracy and exhibit high output variance, substantially underperforming dedicated supervised geolocation models; their limitations stem primarily from insufficient spatial semantic reasoning and poor generalization of geographic knowledge. This work both uncovers a critical bottleneck for zero-shot deployment of VLMs in embodied-intelligence applications (e.g., robot localization) and establishes the first zero-shot benchmark and consistency-aware evaluation framework for geolocation with black-box VLMs.
📝 Abstract
Advances in vision-language models (VLMs) offer exciting opportunities for robotic applications involving image geo-localization, the problem of identifying the geo-coordinates of a place from visual data alone. Recent work has focused on using a VLM as an embedding extractor for geo-localization; however, the most sophisticated VLMs may be available only as black boxes accessible through an API, which imposes a number of limitations: no access to training data, model features, or gradients; no possibility of retraining; API-imposed limits on the number of predictions; frequent prohibitions on training with model outputs; and open-ended queries. The use of a VLM as a stand-alone, zero-shot geo-localization system driven by a single text-based prompt remains largely unexplored. To bridge this gap, this paper undertakes, to the best of our knowledge, the first systematic study of the potential of state-of-the-art VLMs as stand-alone, zero-shot geo-localization systems in a black-box setting with realistic constraints. We consider three main scenarios for this investigation: a) a fixed text-based prompt; b) semantically-equivalent text-based prompts; and c) semantically-equivalent query images. We also account for the auto-regressive, probabilistic generation process of VLMs by using model consistency as a metric in addition to traditional accuracy. Our work provides new insights into the capabilities of different VLMs under the above scenarios.
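One natural way to operationalize the consistency metric described above is pairwise agreement: for a single query image, collect the model's predictions under several semantically-equivalent prompts and measure the fraction of prediction pairs that agree. The sketch below is illustrative only and assumes discrete location labels (e.g., predicted country or a geocell id); the paper's exact definition of consistency may differ, and the function name and example labels are hypothetical.

```python
from itertools import combinations

def consistency(predictions):
    """Pairwise agreement among predictions obtained from
    semantically-equivalent prompt variants for one image.

    predictions: list of hashable location labels (e.g. a predicted
    country name or a geocell id). Returns a value in [0, 1], where
    1.0 means the model gave the same answer for every variant.
    """
    pairs = list(combinations(predictions, 2))
    if not pairs:
        return 1.0  # a single prediction is trivially consistent
    agree = sum(a == b for a, b in pairs)
    return agree / len(pairs)

# Example: five prompt paraphrases, three of which agree.
print(consistency(["France", "France", "Spain", "France", "Italy"]))  # 0.3
```

Because VLM decoding is stochastic, the same pairwise-agreement score can also be applied across repeated samples of a single prompt, separating prompt-induced variance from sampling-induced variance.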