🤖 AI Summary
This work addresses the tendency of current vision-language models (VLMs) to hallucinate their reasoning in geolocation tasks, which undermines auditability because the stated reasoning is not grounded in actual visual evidence. To tackle this, the authors introduce GeoRC, the first chain-of-reasoning benchmark for geolocation, constructed from GeoGuessr game scenarios and enriched with 800 fine-grained visual reasoning chains annotated by domain experts across 500 scenes. They further propose an automated evaluation framework that uses both LLM-as-a-judge (e.g., Qwen 3) and VLM-as-a-judge strategies to systematically assess the accuracy and faithfulness of model-generated reasoning. Experiments reveal that while closed-source VLMs (e.g., Gemini, GPT 5) achieve high localization accuracy, their reasoning quality significantly lags behind human performance; open-source VLMs (e.g., Llama, Qwen) perform only slightly better than a baseline that fabricates a reasoning chain from the ground-truth location with no visual input, highlighting their limited capacity for fine-grained visual understanding.
📝 Abstract
Vision Language Models (VLMs) are good at recognizing the global location of a photograph -- their geolocation prediction accuracy rivals the best human experts. But many VLMs are startlingly bad at explaining which image evidence led to their prediction, even when their location prediction is correct. The reasoning chains produced by VLMs frequently hallucinate scene attributes to support their location prediction (e.g. phantom writing, imagined infrastructure, misidentified flora). In this paper, we introduce the first benchmark for geolocation reasoning chains. We focus on the global location prediction task in the popular GeoGuessr game, which draws from Google Street View imagery spanning more than 100 countries. We collaborate with expert GeoGuessr players, including the reigning world champion, to produce 800 ground truth reasoning chains for 500 query scenes. These expert reasoning chains address hundreds of different discriminative visual attributes such as license plate shape, architecture, and soil properties, to name just a few. We evaluate LLM-as-a-judge and VLM-as-a-judge strategies for scoring VLM-generated reasoning chains against our expert reasoning chains and find that Qwen 3 LLM-as-a-judge correlates best with human scoring. Our benchmark reveals that while large, closed-source VLMs such as Gemini and GPT 5 rival human experts at predicting locations, they still lag behind human experts when it comes to producing auditable reasoning chains. Open-weights VLMs such as Llama and Qwen catastrophically fail on our benchmark -- they perform only slightly better than a baseline in which an LLM hallucinates a reasoning chain with oracle knowledge of the photo location but no visual information at all. We believe the gap between human experts and VLMs on this task points to VLM limitations at extracting fine-grained visual attributes from high resolution images.