UrbanGraphEmbeddings: Learning and Evaluating Spatially Grounded Multimodal Embeddings for Urban Science

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing urban multimodal approaches: they lack explicit alignment between street-view images and structured spatial graphs, which hinders tasks that rely on spatial reasoning. To bridge this gap, the authors introduce UGData, the first explicitly spatially anchored multimodal dataset, and propose UGE, a two-stage training framework that combines instruction-guided contrastive learning with graph-structured spatial encoding to jointly align images, text, and urban spatial graphs. Evaluated on the newly established benchmark UGBench, UGE built on Qwen2.5-VL-7B achieves gains of 44% on image retrieval and 30% on geolocation ranking in training cities, and maintains improvements of over 30% and 22%, respectively, in unseen cities, substantially strengthening cross-city spatial reasoning.
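Neither the summary nor the abstract below includes code, but the contrastive half of the described framework is standard enough to sketch. The snippet is a minimal, hypothetical illustration of contrastive alignment via a symmetric InfoNCE loss over paired image and spatial-context-text embeddings; the function name, temperature, batch size, and embedding dimension are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of contrastive image-text
# alignment as used in CLIP-style training: matched pairs sit on the
# diagonal of a batch similarity matrix and are scored with a symmetric
# InfoNCE (cross-entropy) loss.
import torch
import torch.nn.functional as F

def info_nce(img_emb: torch.Tensor, txt_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired (image, text) embeddings."""
    img = F.normalize(img_emb, dim=-1)    # unit-norm so dot product = cosine
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(img.size(0))   # matched pairs lie on the diagonal
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage: random 512-d embeddings standing in for VLM outputs.
B, D = 8, 512
loss = info_nce(torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```

In the instruction-guided variant the paper describes, the text side would presumably encode a spatial-context caption together with a task instruction before embedding; the loss itself is unchanged.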

📝 Abstract
Learning transferable multimodal embeddings for urban environments is challenging because urban understanding is inherently spatial, yet existing datasets and benchmarks lack explicit alignment between street-view images and urban structure. We introduce UGData, a spatially grounded dataset that anchors street-view images to structured spatial graphs and provides graph-aligned supervision via spatial reasoning paths and spatial context captions, exposing distance, directionality, connectivity, and neighborhood context beyond image content. Building on UGData, we propose UGE, a two-stage training strategy that progressively and stably aligns images, text, and spatial structures by combining instruction-guided contrastive learning with graph-based spatial encoding. Finally, we introduce UGBench, a comprehensive benchmark that evaluates how spatially grounded embeddings support diverse urban understanding tasks, including geolocation ranking, image retrieval, urban perception, and spatial grounding. We develop UGE on multiple state-of-the-art VLM backbones, including Qwen2-VL, Qwen2.5-VL, Phi-3-Vision, and LLaVA1.6-Mistral, and train fixed-dimensional spatial embeddings with LoRA tuning. Built on the Qwen2.5-VL-7B backbone, UGE achieves up to 44% improvement in image retrieval and 30% in geolocation ranking on training cities, and gains of over 30% and 22%, respectively, on held-out cities, demonstrating the effectiveness of explicit spatial grounding for spatially intensive urban tasks.
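As a rough illustration of how retrieval-style UGBench tasks (image retrieval, geolocation ranking) could be scored with fixed-dimensional embeddings, the sketch below computes Recall@K by ranking a gallery against each query by cosine similarity. The metric is standard for embedding retrieval; whether UGBench uses exactly this protocol, gallery construction, or value of K is an assumption.

```python
# Hedged sketch of a standard retrieval evaluation: rank gallery
# embeddings by cosine similarity to each query and report Recall@K,
# the fraction of queries whose ground-truth item appears in the top k.
import numpy as np

def recall_at_k(query: np.ndarray, gallery: np.ndarray,
                gt_index: np.ndarray, k: int = 10) -> float:
    """Recall@K for (num_queries, D) queries against a (gallery_size, D) gallery."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                           # (num_queries, gallery_size)
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k best matches
    hits = (topk == gt_index[:, None]).any(axis=1)
    return float(hits.mean())

# Toy usage: 100 queries against a 1000-item gallery of 512-d embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(100, 512))
g = rng.normal(size=(1000, 512))
print(recall_at_k(q, g, gt_index=rng.integers(0, 1000, size=100), k=10))
```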
Problem

Research questions and friction points this paper is trying to address.

urban science
multimodal embeddings
spatial grounding
street-view images
spatial graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatially grounded embeddings
multimodal urban representation
graph-based spatial encoding (see the sketch after this list)
instruction-guided contrastive learning
urban vision-language modeling
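
The sketch below, referenced from the "graph-based spatial encoding" item above, shows one generic mean-aggregation message-passing step over an urban spatial graph, where nodes might be street-view locations and edges road segments. It is an illustrative stand-in under those assumptions; the paper's actual graph encoder is not specified here.

```python
# Illustrative-only sketch of graph-based spatial encoding: one round of
# mean-aggregation message passing, so each node mixes in its neighbors'
# features before the result is fused with image/text embeddings.
import torch

def mean_message_passing(node_feats: torch.Tensor,
                         edge_index: torch.Tensor) -> torch.Tensor:
    """One propagation step: each node averages its in-neighbors' features.

    node_feats: (N, D) per-node features; edge_index: (2, E) src/dst pairs.
    """
    src, dst = edge_index
    agg = torch.zeros_like(node_feats)
    agg.index_add_(0, dst, node_feats[src])          # sum neighbor features
    deg = torch.zeros(node_feats.size(0))
    deg.index_add_(0, dst, torch.ones(src.size(0)))  # in-degree per node
    agg = agg / deg.clamp(min=1).unsqueeze(-1)       # mean over neighbors
    return node_feats + agg                          # residual update

# Toy graph: 4 intersections connected in a directed cycle of road segments.
x = torch.randn(4, 16)
edges = torch.tensor([[0, 1, 2, 3],
                      [1, 2, 3, 0]])
print(mean_message_passing(x, edges).shape)  # torch.Size([4, 16])
```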