🤖 AI Summary
Existing large vision-language models (LVLMs) perform well on coarse-grained, city-level geolocation but struggle with fine-grained street-level address localization from street-view imagery. To address this, we propose a cross-view alignment fine-tuning framework that leverages perspective-invariant satellite imagery as a macroscopic contextual cue. Our method integrates street-view and satellite image grafting, automatic label generation, and a two-stage training pipeline that first optimizes cross-view feature alignment and then refines address prediction, enhancing the model's global spatial understanding of street layouts. Evaluated on street-view benchmarks built from Pittsburgh and San Francisco, our approach improves average address localization accuracy by over 9% and 12%, respectively, over counterpart LVLMs. This work advances fine-grained, queryable visual geolocation by bridging the semantic and geometric disparities across heterogeneous visual perspectives.
📝 Abstract
Large vision-language models (LVLMs) have demonstrated impressive performance in coarse-grained geo-localization at the country or city level, but they struggle with fine-grained street-level localization within urban areas. In this paper, we explore integrating city-wide address localization capabilities into LVLMs, enabling flexible address-related question answering over street-view images. A key challenge is that street-view visual question answering (VQA) data provides only microscopic visual cues, leading to subpar performance in fine-tuned models. To tackle this issue, we incorporate perspective-invariant satellite images as macro cues and propose cross-view alignment tuning, which includes a satellite-view and street-view image grafting mechanism along with an automatic label generation mechanism. The LVLM's global understanding of street distribution is then enhanced through cross-view matching. Our proposed model, named AddressVLM, is trained with a two-stage protocol: cross-view alignment tuning followed by address localization tuning. Furthermore, we construct two street-view VQA datasets based on image address localization datasets from Pittsburgh and San Francisco. Qualitative and quantitative evaluations demonstrate that AddressVLM outperforms counterpart LVLMs by over 9% and 12% in average address localization accuracy on these two datasets, respectively.
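The abstract does not give implementation details, but the stage-one cross-view matching objective can be illustrated with a minimal, hypothetical sketch: treat each grafted street-view mosaic and its corresponding satellite patch as a positive pair and apply an InfoNCE-style contrastive loss over their embeddings. All names and the specific loss below are illustrative assumptions, not the paper's actual code.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cross_view_alignment_loss(street_embs, sat_embs, temperature=0.07):
    """Hypothetical stage-1 objective: InfoNCE-style contrastive matching.

    street_embs[i] is the embedding of a grafted street-view mosaic;
    sat_embs[i] is the embedding of its matching satellite patch.
    Each street embedding should score its own satellite patch higher
    than all other patches in the batch.
    """
    loss = 0.0
    for i, s in enumerate(street_embs):
        logits = [cosine(s, g) / temperature for g in sat_embs]
        # Numerically stable log-sum-exp for the softmax denominator.
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        # Negative log-probability of the correct (i-th) satellite patch.
        loss += -(logits[i] - log_denom)
    return loss / len(street_embs)
```

In stage two (address localization tuning), the aligned model would then be fine-tuned on the street-view VQA pairs with a standard next-token prediction loss over address answers; that step is omitted here since it is ordinary instruction tuning.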