VectorSynth: Fine-Grained Satellite Image Synthesis with Structured Semantics

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of pixel-level satellite image synthesis and spatially aware editing from semantic polygon annotations. We propose a novel method integrating vision-language alignment with diffusion modeling. Our core innovation is a dense cross-modal alignment mechanism that establishes fine-grained spatial correspondence between vector polygon attributes—such as class, boundary geometry, and topological relations—and image pixels, enabling geometry-constrained conditional generation and language-guided interactive editing. Unlike prior approaches relying solely on text prompts or coarse layout representations, our method significantly improves semantic fidelity and geometric realism. Extensive evaluation across diverse urban scenes demonstrates superior spatial grounding capability and high-fidelity generation quality. The framework establishes a new paradigm for map-driven content generation and supports downstream applications in urban planning and geospatial analysis.

📝 Abstract
We introduce VectorSynth, a diffusion-based framework for pixel-accurate satellite image synthesis conditioned on polygonal geographic annotations with semantic attributes. Unlike prior text- or layout-conditioned models, VectorSynth learns dense cross-modal correspondences that align imagery and semantic vector geometry, enabling fine-grained, spatially grounded edits. A vision-language alignment module produces pixel-level embeddings from polygon semantics; these embeddings guide a conditional image generation framework to respect both spatial extents and semantic cues. VectorSynth supports interactive workflows that mix language prompts with geometry-aware conditioning, allowing rapid what-if simulations, spatial edits, and map-informed content generation. For training and evaluation, we assemble a collection of satellite scenes paired with pixel-registered polygon annotations spanning diverse urban scenes with both built and natural features. We observe strong improvements over prior methods in semantic fidelity and structural realism, and show that our trained vision-language model demonstrates fine-grained spatial grounding. The code and data are available at https://github.com/mvrl/VectorSynth.
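The abstract describes conditioning a diffusion model on pixel-registered polygon annotations. A minimal sketch of the first step of such a pipeline, rasterizing class-labeled polygons into a per-pixel semantic map that a conditioning encoder could consume, is shown below. This is an illustration, not the paper's implementation; the names (`rasterize_classes`, `point_in_polygon`) and the even-odd fill rule are assumptions.

```python
# Hedged sketch: turn semantic vector polygons into a dense per-pixel class
# map, the kind of pixel-registered conditioning signal the abstract describes.
# Pure-Python for clarity; a real pipeline would use a raster library.

def point_in_polygon(x, y, verts):
    """Even-odd rule test: is point (x, y) inside the polygon `verts`?"""
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # Edge crosses the horizontal line through y; toggle on crossings
        # strictly to the right of the query point.
        if (y1 > y) != (y2 > y):
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                inside = not inside
    return inside

def rasterize_classes(polygons, height, width, background=0):
    """polygons: list of (class_id, [(x, y), ...]) in pixel coordinates.
    Later polygons overwrite earlier ones. Returns a height x width grid."""
    grid = [[background] * width for _ in range(height)]
    for class_id, verts in polygons:
        for row in range(height):
            for col in range(width):
                # Sample each pixel at its center.
                if point_in_polygon(col + 0.5, row + 0.5, verts):
                    grid[row][col] = class_id
    return grid

# Example: a 4x4 tile with one square "building" footprint (class 2).
tile = rasterize_classes([(2, [(0, 0), (2, 0), (2, 2), (0, 2)])], 4, 4)
```

In a full system, the resulting class map would be embedded per pixel (e.g. via learned class embeddings plus boundary features) and fed to the diffusion model as spatial conditioning, alongside any language prompt.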
Problem

Research questions and friction points this paper is trying to address.

Generating pixel-accurate satellite images from polygonal geographic annotations
Aligning imagery with semantic vector geometry for fine-grained spatial edits
Enabling interactive workflows combining language prompts with geometry-aware conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based framework for satellite image synthesis
Aligns imagery with semantic vector geometry
Vision-language alignment module produces pixel-level embeddings that guide generation
Dan Cher
Washington University in St. Louis
Brian Wei
Washington University in St. Louis
S. Sastry
Washington University in St. Louis
Nathan Jacobs
Washington University in St. Louis
computer vision, remote sensing, medical imaging