Fast-DDPM: Fast Denoising Diffusion Probabilistic Models for Medical Image-to-Image Generation

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 18
Influential: 1
🤖 AI Summary
To address the prohibitively slow training (days to weeks) and sampling (minutes to hours) of Denoising Diffusion Probabilistic Models (DDPMs) on high-dimensional 3D/4D medical imaging data, caused largely by their 1,000-step diffusion process, this work proposes Fast-DDPM, a framework that trains and samples with only 10 time steps. The method rests on two key ideas: (i) two efficient 10-step noise schedulers, one with uniform and one with non-uniform time-step sampling; and (ii) aligning the training and sampling procedures so that the denoiser is optimized on exactly the time steps used at inference. Evaluated on three medical image-to-image generation tasks (multi-image super-resolution, image denoising, and image-to-image translation), Fast-DDPM reduces training time to 0.2x and per-sample inference time to 0.01x of DDPM's, while outperforming DDPM as well as state-of-the-art CNN- and GAN-based methods on all tasks. Code is publicly available.

📝 Abstract
Denoising diffusion probabilistic models (DDPMs) have achieved unprecedented success in computer vision. However, they remain underutilized in medical imaging, a field crucial for disease diagnosis and treatment planning. This is primarily due to the high computational cost associated with (1) the use of a large number of time steps (e.g., 1,000) in diffusion processes and (2) the increased dimensionality of medical images, which are often 3D or 4D. Training a diffusion model on medical images typically takes days to weeks, while sampling each image volume takes minutes to hours. To address this challenge, we introduce Fast-DDPM, a simple yet effective approach capable of improving training speed, sampling speed, and generation quality simultaneously. Unlike DDPM, which trains the image denoiser across 1,000 time steps, Fast-DDPM trains and samples using only 10 time steps. The key to our method lies in aligning the training and sampling procedures to optimize time-step utilization. Specifically, we introduce two efficient noise schedulers with 10 time steps: one with uniform time-step sampling and another with non-uniform sampling. We evaluated Fast-DDPM across three medical image-to-image generation tasks: multi-image super-resolution, image denoising, and image-to-image translation. Fast-DDPM outperformed DDPM and current state-of-the-art methods based on convolutional networks and generative adversarial networks in all tasks. Additionally, Fast-DDPM reduced the training time to 0.2x and the sampling time to 0.01x compared to DDPM. Our code is publicly available at: https://github.com/mirthAI/Fast-DDPM.
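The two 10-step noise schedulers from the abstract can be sketched as follows. The uniform scheduler is standard even spacing over the original 1,000 steps; the non-uniform spacing shown here (a square-root warp that clusters steps toward the high-noise end) is an illustrative assumption, not necessarily the paper's exact schedule.

```python
def uniform_schedule(T=1000, S=10):
    """S evenly spaced time steps covering [0, T-1]."""
    return [round(i * (T - 1) / (S - 1)) for i in range(S)]

def nonuniform_schedule(T=1000, S=10, power=0.5):
    """S time steps warped by t -> t**power; power < 1 clusters steps
    near the high-noise end (large t). Illustrative choice only."""
    return [round(((i / (S - 1)) ** power) * (T - 1)) for i in range(S)]
```

Both schedules start at t = 0 and end at t = 999; the reverse (sampling) process then visits only these 10 steps instead of all 1,000, which is where the 0.01x sampling time comes from.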
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of diffusion models in medical imaging
Accelerating training and sampling for medical image generation
Improving generation quality with efficient time-step utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduces diffusion time steps from 1,000 to 10
Introduces two efficient 10-step noise schedulers (uniform and non-uniform)
Aligns training and sampling procedures to optimize time-step utilization
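The alignment idea above can be sketched in a few lines: vanilla DDPM draws a random t from all 1,000 steps at every training iteration, but a 10-step sampler will only ever visit 10 of them, so Fast-DDPM trains the denoiser on exactly those steps. Uniform spacing is shown; the names are illustrative, not the repository's API.

```python
import random

T, S = 1000, 10
# The 10 time steps the sampler will visit (uniform spacing shown).
sampling_steps = [round(i * (T - 1) / (S - 1)) for i in range(S)]

def draw_training_timestep(rng=random):
    """Vanilla DDPM would use rng.randrange(T); Fast-DDPM restricts
    training to the same 10 steps used at sampling time."""
    return rng.choice(sampling_steps)
```

Concentrating the denoiser's capacity on the steps actually used at inference is what lets both training time (0.2x) and generation quality improve together.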