🤖 AI Summary
This work addresses the challenge of improving geographic fidelity in image generation and 3D scene reconstruction. We propose a GPS-conditioned diffusion model that encodes raw latitude-longitude coordinates, extracted from photo metadata, into geospatial embeddings, enabling generation conditioned jointly on text and geography. We further introduce GPS-constrained Score Distillation Sampling (SDS), which uses GPS conditioning to constrain the appearance of each viewpoint during 2D diffusion sampling, enabling geo-aware 3D reconstruction. Notably, this is the first approach to use raw GPS coordinates, rather than coarse regional labels, as a direct generative control signal. Experiments demonstrate a substantial improvement in geolocation classification accuracy for generated images, and structural error in 3D reconstruction decreases by 18.7% relative to baselines. Quantitative metrics and qualitative evaluations both confirm complementary improvements in geographic semantic alignment and geometric consistency.
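The summary above does not specify how raw coordinates become a conditioning signal. As a rough illustration only, the PyTorch sketch below encodes normalized latitude/longitude with sinusoidal (Fourier) features and a small MLP, producing an embedding the same width as the model's text-conditioning tokens. The names (`GPSEncoder`, `fourier_features`) and all hyperparameters are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: encode raw (lat, lon) into a conditioning embedding.
# Architecture and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

def fourier_features(coords: torch.Tensor, num_freqs: int = 8) -> torch.Tensor:
    """Map normalized (lat, lon) in [-1, 1]^2 to sinusoidal features,
    so nearby locations get similar but still distinguishable codes."""
    freqs = 2.0 ** torch.arange(num_freqs, device=coords.device)  # (F,)
    angles = coords.unsqueeze(-1) * freqs * torch.pi              # (B, 2, F)
    feats = torch.cat([angles.sin(), angles.cos()], dim=-1)       # (B, 2, 2F)
    return feats.flatten(start_dim=1)                             # (B, 4F)

class GPSEncoder(nn.Module):
    """Project Fourier features of lat/lon to the width of the diffusion
    model's text-conditioning tokens (assumed 768 here)."""
    def __init__(self, num_freqs: int = 8, cond_dim: int = 768):
        super().__init__()
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(4 * num_freqs, cond_dim),
            nn.SiLU(),
            nn.Linear(cond_dim, cond_dim),
        )

    def forward(self, lat: torch.Tensor, lon: torch.Tensor) -> torch.Tensor:
        # Normalize raw coordinates: lat in [-90, 90], lon in [-180, 180].
        coords = torch.stack([lat / 90.0, lon / 180.0], dim=-1)   # (B, 2)
        return self.mlp(fourier_features(coords, self.num_freqs))

# One plausible wiring: append the GPS embedding as an extra token to the
# text-encoder output before cross-attention, e.g.
#   cond = torch.cat([text_tokens, gps_encoder(lat, lon).unsqueeze(1)], dim=1)
```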
📝 Abstract
We show that the GPS tags contained in photo metadata provide a useful control signal for image generation. We train GPS-to-image models and use them for tasks that require a fine-grained understanding of how images vary within a city. In particular, we train a diffusion model to generate images conditioned on both GPS and text. The learned model generates images that capture the distinctive appearance of different neighborhoods, parks, and landmarks. We also extract 3D models from 2D GPS-to-image models through score distillation sampling, using GPS conditioning to constrain the appearance of the reconstruction from each viewpoint. Our evaluations suggest that our GPS-conditioned models successfully learn to generate images that vary based on location, and that GPS conditioning improves estimated 3D structure.
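For concreteness, here is a hedged sketch of how GPS conditioning might enter a score distillation sampling loop: a rendering of the current 3D model from a given camera is noised, and the residual of the GPS-and-text-conditioned denoiser supplies the gradient direction. The `diffusion(noisy, t, text_cond, gps_cond)` interface and all constants are hypothetical stand-ins under assumed shapes, not the paper's released code.

```python
# Hypothetical sketch of GPS-conditioned score distillation sampling (SDS).
# `diffusion` stands in for a pretrained GPS-and-text-conditioned denoiser;
# its call signature here is an assumption.
import torch

def gps_sds_grad(diffusion, rendered, text_cond, gps_cond, alphas_cumprod):
    """One SDS step: noise a rendering of the current 3D model and use the
    GPS-conditioned denoiser's residual as a gradient for that view."""
    b = rendered.shape[0]
    t = torch.randint(20, 980, (b,), device=rendered.device)   # random timestep
    noise = torch.randn_like(rendered)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = a_t.sqrt() * rendered + (1 - a_t).sqrt() * noise   # forward diffusion

    with torch.no_grad():
        # The GPS embedding for this camera's location constrains what the
        # scene should look like from this viewpoint.
        eps_pred = diffusion(noisy, t, text_cond, gps_cond)

    w = 1 - a_t                                                # common weighting
    return w * (eps_pred - noise)                              # grad w.r.t. rendering
```

In practice, the returned tensor would be pushed back through a differentiable renderer, e.g. `rendered.backward(gradient=grad)`, so the 3D parameters move toward images the GPS-conditioned model considers likely at that location.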