GAEA: A Geolocation Aware Conversational Model

📅 2025-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing image geolocation models predict GPS coordinates but lack geographic semantic understanding and conversational interaction capabilities, and large multimodal models (LMMs) remain suboptimal for this specialized task. Method: We propose GAEA, a conversational multimodal model tailored for image geolocalization that can answer questions about an image's location. To train it, we construct a large-scale geographic question-answering dataset, also named GAEA, comprising 800K images and around 1.6M QA pairs synthesized from OpenStreetMap (OSM) attributes and geographic context clues, and we design a 4K image-text benchmark with diverse question types for evaluating conversational capabilities. Results: Against 11 state-of-the-art LMMs, GAEA outperforms the best open-source model, LLaVA-OneVision, by 25.69% and the best proprietary model, GPT-4o, by 8.28%. The model, dataset, and code are fully open-sourced.

📝 Abstract
Image geolocalization, in which an AI model traditionally predicts the precise GPS coordinates of an image, is a challenging task with many downstream applications. However, the user cannot use such a model to learn anything beyond the GPS coordinates: the model lacks an understanding of the location and the conversational ability to communicate with the user. Recently, with the tremendous progress of proprietary and open-source large multimodal models (LMMs), researchers have attempted to geolocalize images via LMMs. However, the issues remain unaddressed; beyond general tasks, LMMs struggle with more specialized downstream tasks, one of which is geolocalization. In this work, we propose to solve this problem by introducing GAEA, a conversational model that can provide information about the location of an image as required by a user. No large-scale dataset enabling the training of such a model exists, so we propose a comprehensive dataset, also named GAEA, with 800K images and around 1.6M question-answer pairs constructed by leveraging OpenStreetMap (OSM) attributes and geographical context clues. For quantitative evaluation, we propose a diverse benchmark comprising 4K image-text pairs with diverse question types to evaluate conversational capabilities. We consider 11 state-of-the-art open-source and proprietary LMMs and demonstrate that GAEA significantly outperforms the best open-source model, LLaVA-OneVision, by 25.69% and the best proprietary model, GPT-4o, by 8.28%. Our dataset, model, and code are publicly available.
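
The abstract describes building QA pairs by pairing geotagged images with OpenStreetMap attributes. As a rough illustration only (the paper's actual pipeline, templates, and tag coverage are not specified here), the idea can be sketched as template-based QA generation over OSM-style tags; all function names and question templates below are hypothetical:

```python
# Hypothetical sketch: turning OSM-style tags attached to a geotagged image
# into simple question-answer pairs, in the spirit of (not identical to)
# the GAEA dataset construction described in the abstract.

def osm_tags_to_qa(tags):
    """Map a dict of OSM attributes to (question, answer) pairs.

    Only tags with a matching template produce a QA pair; real pipelines
    would use many more templates and paraphrase the questions.
    """
    templates = {
        "addr:city": "Which city was this image most likely taken in?",
        "addr:country": "What country is shown in this image?",
        "amenity": "What kind of amenity is visible near this location?",
        "tourism": "What type of tourist site is this location?",
    }
    pairs = []
    for key, question in templates.items():
        if key in tags:
            pairs.append((question, tags[key]))
    return pairs

# Example usage with made-up tags for one image:
example_tags = {"addr:city": "Paris", "addr:country": "France", "tourism": "attraction"}
for q, a in osm_tags_to_qa(example_tags):
    print(q, "->", a)
```

Scaled over 800K images, a handful of templates per image yields QA-pair counts on the order of the 1.6M reported, which is presumably why template-style synthesis over OSM attributes is attractive here.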
Problem

Research questions and friction points this paper is trying to address.

Enhances image geolocalization with conversational AI capabilities.
Addresses lack of specialized datasets for training geolocation-aware models.
Improves performance over existing models in geolocation tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conversational model for image geolocalization
Dataset with 800K images and 1.6M QA pairs
Outperforms LLaVA-OneVision and GPT-4o