🤖 AI Summary
This work addresses the challenges of pixel-level satellite image synthesis and spatially aware editing from semantic polygon annotations. We propose a novel method integrating vision-language alignment with diffusion modeling. Our core innovation is a dense cross-modal alignment mechanism that establishes fine-grained spatial correspondence between vector polygon attributes—such as class, boundary geometry, and topological relations—and image pixels, enabling geometry-constrained conditional generation and language-guided interactive editing. Unlike prior approaches that rely solely on text prompts or coarse layout representations, our method significantly improves semantic fidelity and geometric realism. Extensive evaluation across diverse urban scenes demonstrates superior spatial grounding and high-fidelity generation. The framework establishes a new paradigm for map-driven content generation and supports downstream applications in urban planning and geospatial analysis.
📝 Abstract
We introduce VectorSynth, a diffusion-based framework for pixel-accurate satellite image synthesis conditioned on polygonal geographic annotations with semantic attributes. Unlike prior text- or layout-conditioned models, VectorSynth learns dense cross-modal correspondences that align imagery and semantic vector geometry, enabling fine-grained, spatially grounded edits. A vision-language alignment module produces pixel-level embeddings from polygon semantics; these embeddings guide a conditional image generation framework to respect both spatial extents and semantic cues. VectorSynth supports interactive workflows that mix language prompts with geometry-aware conditioning, allowing rapid what-if simulations, spatial edits, and map-informed content generation. For training and evaluation, we assemble a collection of satellite scenes paired with pixel-registered polygon annotations spanning diverse urban areas with both built and natural features. We observe strong improvements over prior methods in semantic fidelity and structural realism, and show that our trained vision-language model demonstrates fine-grained spatial grounding. The code and data are available at https://github.com/mvrl/VectorSynth.
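To make the conditioning pathway concrete, here is a minimal sketch of the first step the abstract describes: turning semantic polygon annotations into a pixel-level conditioning tensor that a diffusion model could attend to. This is not the paper's implementation; the function names (`rasterize_polygons`, `embed_class_map`), the even-odd rasterization, and the simple embedding-table lookup are illustrative assumptions standing in for the learned vision-language alignment module.

```python
import numpy as np

def rasterize_polygons(polygons, class_ids, h, w):
    """Rasterize semantic polygons into a per-pixel class-id map.

    polygons  : list of (N_i, 2) arrays of (x, y) vertices in pixel coords
    class_ids : integer class label per polygon (later polygons overwrite
                earlier ones); background pixels stay 0
    """
    ys, xs = np.mgrid[0:h, 0:w]
    # Test against pixel centers rather than corners.
    px = xs.ravel() + 0.5
    py = ys.ravel() + 0.5
    class_map = np.zeros(h * w, dtype=np.int64)
    for poly, cid in zip(polygons, class_ids):
        v = np.asarray(poly, dtype=float)
        inside = np.zeros(h * w, dtype=bool)
        n = len(v)
        for i in range(n):  # even-odd ray-casting point-in-polygon test
            x1, y1 = v[i]
            x2, y2 = v[(i + 1) % n]
            crosses = (y1 > py) != (y2 > py)
            # x-coordinate where the edge crosses the pixel's horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1 + 1e-12)
            inside ^= crosses & (px < x_cross)
        class_map[inside] = cid
    return class_map.reshape(h, w)

def embed_class_map(class_map, embed_table):
    """Look up one embedding per pixel -> (H, W, D) conditioning tensor.

    In the full framework this lookup would be replaced by learned,
    language-aligned embeddings of the polygon semantics.
    """
    return embed_table[class_map]
```

A real system would additionally encode boundary geometry and topological relations, but even this reduced form shows the key property the abstract emphasizes: the condition is dense and pixel-registered, so spatial extents constrain generation directly rather than through a coarse layout or a text prompt alone.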