🤖 AI Summary
Existing GAN-based semantic image synthesis methods suffer from inherent trade-offs between generation quality and diversity. To address this, we propose the first semantic image synthesis framework built upon denoising diffusion probabilistic models (DDPMs). Our method decouples the two key inputs: the noisy image is fed into the U-Net encoder, while the semantic layout guides the decoder path via multi-level Spatially-Adaptive Denormalization (SPADE). Crucially, we introduce classifier-free guidance, the first such application in semantic diffusion synthesis, to substantially improve layout-to-pixel alignment. Evaluated on four standard benchmarks (Cityscapes, ADE20K, COCO-Stuff, and Mapillary Vistas), our approach achieves state-of-the-art performance, with an FID of 14.3 and an LPIPS of 0.52, demonstrating significant gains in both visual fidelity and generation diversity.
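The decoupling described above hinges on SPADE: instead of concatenating the semantic mask with the noisy image at the network input, the decoder's activations are normalized and then modulated by scale and shift maps predicted from the layout at every spatial location. A minimal NumPy sketch of that denormalization step, under simplifying assumptions (per-channel statistics stand in for the real group/batch normalization, and `gamma_map`/`beta_map` are taken as already predicted from the mask by small convolutions, which are omitted here):

```python
import numpy as np

def spade_denorm(features, gamma_map, beta_map, eps=1e-5):
    """Sketch of spatially-adaptive denormalization (SPADE).

    features:  (C, H, W) decoder activations from the noisy-image path
    gamma_map: (C, H, W) scale map predicted from the semantic layout
    beta_map:  (C, H, W) shift map predicted from the semantic layout

    The activations are first normalized per channel, then modulated
    element-wise by the layout-derived maps, so the semantic mask
    controls the feature statistics at every spatial position.
    """
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return normalized * (1.0 + gamma_map) + beta_map
```

With zero scale and shift maps this reduces to plain normalization; non-zero maps let flat semantic regions (e.g. "sky", "road") imprint distinct statistics on the decoder features, which is why injecting the mask here preserves more layout information than feeding it to the encoder input.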
📝 Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks compared with Generative Adversarial Networks (GANs). Recent work on semantic image synthesis, however, mainly follows the de facto GAN-based approaches, which may lead to unsatisfactory quality or diversity in the generated images. In this paper, we propose a novel framework based on DDPMs for semantic image synthesis. Unlike previous conditional diffusion models, which directly feed the semantic layout and the noisy image together into a U-Net and may therefore not fully leverage the information in the input semantic mask, our framework processes the semantic layout and the noisy image differently: it feeds the noisy image to the encoder of the U-Net, while injecting the semantic layout into the decoder through multi-layer spatially-adaptive normalization operators. To further improve generation quality and semantic interpretability, we introduce the classifier-free guidance sampling strategy, which incorporates the score of an unconditional model into the sampling process. Extensive experiments on four benchmark datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance in terms of fidelity (FID) and diversity (LPIPS). Our code and pretrained models are available at https://github.com/WeilunWang/semantic-diffusion-model.
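The classifier-free guidance strategy mentioned in the abstract evaluates the denoiser twice at each sampling step, once conditioned on the semantic layout and once unconditionally (with the layout dropped, e.g. replaced by a null mask), and extrapolates from the unconditional noise prediction toward the conditional one. A minimal NumPy sketch of that combination step; the function name and the guidance scale `s` are illustrative, not taken from the paper's released code:

```python
import numpy as np

def guided_noise(eps_cond, eps_uncond, s):
    """Combine conditional and unconditional noise predictions
    via classifier-free guidance.

    eps_cond:   noise predicted with the semantic layout as input
    eps_uncond: noise predicted with the layout dropped (null mask)
    s:          guidance scale; s = 0 recovers the unconditional
                model, s = 1 the plain conditional model, and s > 1
                pushes samples toward stronger layout alignment.
    """
    return eps_uncond + s * (eps_cond - eps_uncond)
```

Larger scales trade diversity for fidelity to the mask, which is why the guidance scale is a key knob when balancing the FID and LPIPS results reported above.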