🤖 AI Summary
Prior work lacks a comprehensive analysis of the accuracy, generalization boundaries, and unintentional privacy leakage of generative vision-language models (VLMs) in visual-only image geolocalization. Method: We introduce the first multidimensional evaluation framework, empirically testing 25 state-of-the-art VLMs across four benchmark datasets—including street-view and social-media imagery—to assess geolocation performance, robustness, and privacy implications. Contribution/Results: VLMs achieve 61% geolocation accuracy on social-media images—significantly outperforming their accuracy on street-view imagery—demonstrating strong but highly context-dependent spatial reasoning. Critically, their implicit location-inference capability poses substantial privacy risks, enabling potential user tracking and surveillance without explicit metadata. Our analysis identifies key sources of bias, scene-specific limitations, and regulatory gaps in current AI governance frameworks. This work provides foundational empirical evidence and methodological tools to inform privacy-preserving design and policy for multimodal foundation models.
📝 Abstract
Geo-localization is the task of identifying the location of an image using visual cues alone. It has beneficial applications, such as improving disaster response, enhancing navigation, and supporting geography education. Recently, Vision-Language Models (VLMs) have increasingly demonstrated capabilities as accurate image geo-locators. This brings significant privacy risks, including stalking and surveillance, given the widespread use of AI models and the sharing of photos on social media. The precision of these models is likely to improve in the future. Despite these risks, there is little work systematically evaluating the geolocation precision of generative VLMs, their limits, and their potential for unintended inferences. To bridge this gap, we conduct a comprehensive assessment of the geolocation capabilities of 25 state-of-the-art VLMs on four benchmark image datasets captured in diverse environments. Our results offer insight into the internal reasoning of VLMs and highlight their strengths, limitations, and potential societal risks. Our findings indicate that current VLMs perform poorly on generic street-level images yet achieve notably high accuracy (61%) on images resembling social media content, raising significant and urgent privacy concerns.