🤖 AI Summary
Autoregressive modeling fundamentally conflicts with the structural requirements of geospatial understanding, yielding outputs that lack spatial coherence and multi-object relational consistency. To address this, we propose GeoDiT, the first diffusion-based vision-language model explicitly designed for geospatial understanding. GeoDiT reformulates generation as a coarse-to-fine parallel semantic refinement process, eliminating sequential modeling constraints, and employs a unified vision-language diffusion architecture to achieve fine-grained cross-modal alignment. It achieves state-of-the-art performance on three core geospatial tasks (image captioning, visual grounding, and multi-object detection), consistently outperforming dominant autoregressive baselines. These results empirically validate the effectiveness and generalizability of parallel generative paradigms for geospatial understanding.
📝 Abstract
Autoregressive models are structurally misaligned with the inherently parallel nature of geospatial understanding, forcing a rigid sequential narrative onto scenes and fundamentally hindering the generation of structured and coherent outputs. We challenge this paradigm by reframing geospatial generation as a parallel refinement process, enabling a holistic, coarse-to-fine synthesis that resolves all semantic elements simultaneously. To operationalize this, we introduce GeoDiT, the first diffusion-based vision-language model tailored for the geospatial domain. Extensive experiments demonstrate that GeoDiT establishes a new state-of-the-art on benchmarks requiring structured, object-centric outputs. It achieves significant gains in image captioning, visual grounding, and multi-object detection, precisely the tasks where autoregressive models falter. Our work validates that aligning the generative process with the data's intrinsic structure is key to unlocking superior performance in complex geospatial analysis.
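To make the paradigm concrete, below is a minimal sketch of the kind of coarse-to-fine parallel refinement decoding the abstract describes, in the style of masked-diffusion text generators. This is an illustrative assumption, not GeoDiT's published implementation: the `model`, `image_feats`, and `mask_id` interfaces and the cosine unmasking schedule are all hypothetical placeholders.

```python
import math
import torch

@torch.no_grad()
def parallel_refinement_decode(model, image_feats, seq_len, mask_id, num_steps=8):
    """Illustrative coarse-to-fine parallel decoding (assumed interface,
    not GeoDiT's actual API). All output positions start as [MASK]; each
    step predicts every position in parallel, commits the most confident
    predictions, and leaves the rest masked for the next, finer pass."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for step in range(1, num_steps + 1):
        logits = model(image_feats, tokens)        # (1, seq_len, vocab)
        conf, pred = logits.softmax(-1).max(-1)    # per-position confidence
        masked = tokens.eq(mask_id)
        # Cosine schedule (assumed): the cumulative number of committed
        # tokens grows from coarse (few, high-confidence) to fine (all).
        target = math.ceil(seq_len * math.sin(step / num_steps * math.pi / 2))
        n_commit = min(max(1, target - (seq_len - int(masked.sum()))),
                       int(masked.sum()))
        # Commit only among still-masked positions, highest confidence first.
        conf = conf.masked_fill(~masked, -1.0)
        idx = conf.topk(n_commit, dim=-1).indices
        tokens.scatter_(1, idx, pred.gather(1, idx))
    return tokens
```

Because every position is predicted jointly at each pass, global constraints such as spatial relations among detected objects can be resolved simultaneously rather than committed left to right, and a fixed small number of refinement steps replaces `seq_len` sequential decoding steps. This is the sense in which the generative process is "parallel."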