A Framework for Low-Effort Training Data Generation for Urban Semantic Segmentation

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Severe domain shift between synthetic data and real-world urban scenes (e.g., Cityscapes), coupled with the high cost of high-fidelity 3D modeling, hinders effective transfer learning. Method: We propose a lightweight diffusion-based domain adaptation framework that leverages only low-cost, coarse synthetic images and their imperfect pseudo-labels. It employs pseudo-label supervision, generated-sample filtering, image–label alignment optimization, and cross-dataset semantic consistency normalization to achieve high-fidelity semantic-to-photorealistic image translation. Contribution/Results: Without requiring precise 3D modeling or manual annotation, our method bridges the performance gap between simple synthetic data and meticulously designed ground-truth datasets. Evaluated on five synthetic and two real-world benchmarks, it improves semantic segmentation mIoU by up to +8.0 percentage points over state-of-the-art domain adaptation methods, establishing a new paradigm for cost-effective, scalable urban scene understanding.

📝 Abstract
Synthetic datasets are widely used for training urban scene recognition models, but even highly realistic renderings show a noticeable gap to real imagery. This gap is particularly pronounced when adapting to a specific target domain, such as Cityscapes, where differences in architecture, vegetation, object appearance, and camera characteristics limit downstream performance. Closing this gap with more detailed 3D modelling would require expensive asset and scene design, defeating the purpose of low-cost labelled data. To address this, we present a new framework that adapts an off-the-shelf diffusion model to a target domain using only imperfect pseudo-labels. Once trained, it generates high-fidelity, target-aligned images from semantic maps of any synthetic dataset, including low-effort sources created in hours rather than months. The method filters suboptimal generations, rectifies image-label misalignments, and standardises semantics across datasets, transforming weak synthetic data into competitive real-domain training sets. Experiments on five synthetic datasets and two real target datasets show segmentation gains of up to +8.0%pt. mIoU over state-of-the-art translation methods, making rapidly constructed synthetic datasets as effective as high-effort, time-intensive synthetic datasets requiring extensive manual design. This work highlights a valuable collaborative paradigm where fast semantic prototyping, combined with generative models, enables scalable, high-quality training data creation for urban scene understanding.
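The abstract mentions that suboptimal generations are filtered out. One plausible way to realize such a filter (a minimal illustrative sketch, not the paper's actual implementation; the function names, the use of mIoU as the agreement score, and the threshold value are assumptions) is to segment each generated image with a pretrained model and keep only samples whose prediction agrees sufficiently with the semantic map that conditioned the generation:

```python
import numpy as np

def miou(pred, ref, num_classes):
    """Mean IoU between a predicted label map and the conditioning semantic map."""
    ious = []
    for c in range(num_classes):
        p, r = pred == c, ref == c
        union = np.logical_or(p, r).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(np.logical_and(p, r).sum() / union)
    return float(np.mean(ious)) if ious else 0.0

def filter_generations(samples, num_classes, threshold=0.6):
    """Keep (image, conditioning map) pairs whose predicted segmentation
    agrees with the conditioning map above the mIoU threshold.

    samples: iterable of (image, cond_map, pred_map) triples, where
    pred_map comes from a pretrained segmentation model (hypothetical)."""
    kept = []
    for image, cond_map, pred_map in samples:
        if miou(pred_map, cond_map, num_classes) >= threshold:
            kept.append((image, cond_map))
    return kept
```

In practice the agreement score and threshold would be tuned per dataset; a stricter threshold trades dataset size for image-label consistency.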
Problem

Research questions and friction points this paper is trying to address.

Bridging domain gap between synthetic and real urban imagery
Reducing manual effort in generating realistic training datasets
Improving semantic segmentation performance with domain-aligned data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts diffusion model using imperfect pseudo-labels
Generates target-aligned images from semantic maps
Filters suboptimal generations and rectifies misalignments
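The framework also standardises semantics across datasets so that maps from different synthetic sources share one label space. A minimal sketch of such a remapping (the mapping table below is hypothetical; real tables depend on each dataset's taxonomy, e.g., a Cityscapes-style trainId scheme) could look like this:

```python
import numpy as np

# Hypothetical mapping from one synthetic dataset's label IDs to a shared
# taxonomy; 255 serves as the conventional ignore index.
SYNTHETIC_TO_SHARED = {1: 0, 2: 1, 5: 13}

def standardize_labels(label_map, mapping, ignore_id=255):
    """Remap per-pixel class IDs into the shared taxonomy.

    IDs absent from the mapping are sent to the ignore index, so classes
    that exist in only one dataset do not corrupt the merged label space."""
    out = np.full_like(label_map, ignore_id)
    for src, dst in mapping.items():
        out[label_map == src] = dst
    return out
```

Applying one such table per source dataset yields semantically consistent label maps, which is what allows maps "of any synthetic dataset" to condition the same adapted diffusion model.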
Denis Zavadski
Computer Vision and Learning Lab, IWR, Heidelberg University, Germany
Damjan Kalšan
Computer Vision and Learning Lab, IWR, Heidelberg University, Germany
Tim Küchler
Computer Vision and Learning Lab, IWR, Heidelberg University, Germany
Haebom Lee
AIMMO, Republic of Korea
Stefan Roth
Professor of Computer Science, TU Darmstadt
Computer Vision, Machine Learning
Carsten Rother
Professor, Heidelberg University, Germany
Computer Vision, Machine Learning