GaGA: Towards Interactive Global Geolocation Assistant

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Global image geolocation—predicting the geographic coordinates of an image’s capture location—faces challenges including coarse-grained localization, poor interpretability, and limited user controllability. To address these, we propose GaGA, the first interactive geolocation assistant that tightly integrates large vision-language models (LVLMs) with domain-specific geographic knowledge. GaGA introduces a multimodal reasoning framework featuring: (1) interactive prompt engineering enabling real-time user intervention, contextual cue supplementation, and error correction; and (2) a geographic knowledge-augmented decoding mechanism that explicitly grounds visual cues in structured world knowledge. Furthermore, we release MG-Geo, the first large-scale, high-quality multimodal geolocation dataset comprising 5 million image–text pairs. On the GWS15k benchmark, GaGA achieves state-of-the-art performance, improving top-1 country-level accuracy by 4.57% and city-level accuracy by 2.92%, while ensuring high precision, strong interpretability, and user controllability.
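The knowledge-augmented decoding mechanism can be pictured as reranking an LVLM's candidate locations against structured geographic knowledge. The sketch below is an illustrative reading only, assuming a hypothetical cue table (CUE_TABLE) and rerank helper; the paper's actual grounding mechanism is not reproduced here.

```python
# Illustrative sketch of knowledge-augmented decoding: candidate countries
# proposed by an LVLM are rescored by how well the detected visual cues are
# supported by a structured knowledge table. All names and scores below are
# hypothetical placeholders, not values from the paper.

CUE_TABLE = {
    "portuguese_signage":  {"Brazil": 0.6, "Portugal": 0.4},
    "tropical_vegetation": {"Brazil": 0.8, "Portugal": 0.2},
    "right_hand_traffic":  {"Brazil": 0.5, "Portugal": 0.5},
}

def rerank(candidates, detected_cues):
    """Boost candidates whose country is supported by the detected cues."""
    scored = []
    for country, base_logprob in candidates:
        support = sum(CUE_TABLE[cue].get(country, 0.0) for cue in detected_cues)
        scored.append((country, base_logprob + support))
    return max(scored, key=lambda pair: pair[1])[0]

# The raw LVLM slightly prefers Portugal, but grounding the visual cues
# in the knowledge table flips the decision to Brazil.
candidates = [("Portugal", -1.1), ("Brazil", -1.3)]
cues = ["portuguese_signage", "tropical_vegetation"]
print(rerank(candidates, cues))  # -> Brazil
```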

📝 Abstract
Global geolocation, which seeks to predict the geographic location of images captured anywhere in the world, is one of the most challenging tasks in computer vision. In this paper, we introduce GaGA, an interactive global geolocation assistant built upon large vision-language models (LVLMs). GaGA uncovers geographical clues within images and combines them with the extensive world knowledge embedded in LVLMs to determine geolocations, while also providing justifications and explanations for its predictions. We further design a novel interactive geolocation method that surpasses traditional static inference: users can intervene in predictions, correct them, or provide additional clues, making the model more flexible and practical. GaGA is developed on the newly proposed Multi-modal Global Geolocation (MG-Geo) dataset, a comprehensive collection of 5 million high-quality image-text pairs. GaGA achieves state-of-the-art performance on the GWS15k benchmark, improving accuracy by 4.57% at the country level and 2.92% at the city level. These advances represent a significant step toward highly accurate, interactive geolocation systems with global applicability.
Problem

Research questions and friction points this paper is trying to address.

Predicting global image geolocation using vision-language models
Enhancing geolocation accuracy with interactive user intervention
Creating a benchmark dataset for multi-modal geolocation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large vision-language models to uncover geographic clues for geolocation
Introduces an interactive user-intervention method (sketched below)
Contributes the Multi-modal Global Geolocation (MG-Geo) dataset of 5 million image-text pairs
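To make the interactive intervention concrete, here is a minimal sketch of the prompt-engineering loop, assuming a generic chat-style LVLM callable. GeoDialogue, toy_lvlm, and the prompt wording are hypothetical stand-ins, not GaGA's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class GeoDialogue:
    """Accumulates the image, model predictions, and user-supplied clues."""
    image_path: str
    turns: list = field(default_factory=list)

    def build_prompt(self) -> str:
        prompt = (
            "You are a geolocation assistant. Identify visual clues "
            "(signage language, vegetation, architecture, driving side) "
            "and predict the country and city with a justification."
        )
        for role, text in self.turns:
            prompt += f"\n{role}: {text}"
        return prompt

    def add_user_clue(self, clue: str) -> None:
        # Interactive intervention: the user corrects or supplements clues
        # before the model re-predicts, instead of one-shot static inference.
        self.turns.append(("user", clue))

    def predict(self, lvlm) -> str:
        answer = lvlm(self.image_path, self.build_prompt())
        self.turns.append(("assistant", answer))
        return answer

def toy_lvlm(image_path: str, prompt: str) -> str:
    """Stand-in for a real LVLM call; swap in an actual model API."""
    if "Portuguese" in prompt:
        return "Country: Brazil. City: Sao Paulo. (Portuguese signage)"
    return "Country: Spain. City: Madrid. (Latin-script signage)"

dialog = GeoDialogue("street_view.jpg")
print(dialog.predict(toy_lvlm))   # initial static prediction
dialog.add_user_clue("The shop signs are in Portuguese, not Spanish.")
print(dialog.predict(toy_lvlm))   # revised after user intervention
```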
👥 Authors
Zhiyang Dou
University of Chinese Academy of Sciences
Zipeng Wang
University of Chinese Academy of Sciences
Xumeng Han
University of Chinese Academy of Sciences
Computer Vision
Chenhui Qiang
University of Chinese Academy of Sciences
Kuiran Wang
University of Chinese Academy of Sciences
Object Tracking · Computer Vision
Guorong Li
University of Chinese Academy of Sciences
Computer Vision · Visual Tracking · Machine Learning
Zhibei Huang
University of Chinese Academy of Sciences
Zhenjun Han
University of Chinese Academy of Sciences