Geodesic Diffusion Models for Medical Image-to-Image Generation

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard diffusion models propagate data distributions along non-geodesic paths in probability space, necessitating hundreds of time steps for training and sampling—leading to low computational efficiency. To address this, we propose Geodesic Diffusion Models (GDM), the first framework that explicitly models the shortest path—the Fisher–Rao geodesic—from the data distribution to a Gaussian prior. By integrating variance-exploding (VE) noise scheduling within a continuous-time formulation, GDM achieves minimum-energy transport. Our method enables high-fidelity generation with only 15 sampling steps, drastically reducing computational overhead. On CT denoising and MRI super-resolution, GDM achieves state-of-the-art performance: it trains 50× faster than DDPM and 10× faster than Fast-DDPM; sampling is 66× faster than DDPM and on par with Fast-DDPM. The core innovation lies in incorporating Fisher–Rao geodesics into diffusion modeling—unifying efficiency gains with improved generative quality.

📝 Abstract
Diffusion models transform an unknown data distribution into a Gaussian prior by progressively adding noise until the data become indistinguishable from pure noise. This stochastic process traces a path in probability space, evolving from the original data distribution (treated as a Gaussian with near-zero variance) to an isotropic Gaussian. The denoiser then learns to reverse this process, generating high-quality samples from random Gaussian noise. However, standard diffusion models, such as the Denoising Diffusion Probabilistic Model (DDPM), do not ensure a geodesic (i.e., shortest) path in probability space. This inefficiency necessitates many intermediate time steps, leading to high computational costs in training and sampling. To address this limitation, we propose the Geodesic Diffusion Model (GDM), which defines a geodesic path under the Fisher-Rao metric with a variance-exploding noise scheduler. This formulation transforms the data distribution into a Gaussian prior with minimal energy, significantly improving the efficiency of diffusion models. We trained GDM by continuously sampling time steps from 0 to 1 and used as few as 15 evenly spaced time steps for model sampling. We evaluated GDM on two medical image-to-image generation tasks: CT image denoising and MRI super-resolution. Experimental results show that GDM achieved state-of-the-art performance while reducing training time by 50-fold compared to DDPM and 10-fold compared to Fast-DDPM, with 66 times faster sampling than DDPM and a sampling speed similar to Fast-DDPM. These efficiency gains enable rapid model exploration and real-time clinical applications. Our code is publicly available at: https://github.com/mirthAI/GDM-VE.
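The variance-exploding (VE) forward process described above can be illustrated with a minimal sketch. The geometric noise schedule and the `SIGMA_MIN`/`SIGMA_MAX` values below are illustrative assumptions (the common VE parameterization), not the paper's derived Fisher-Rao geodesic schedule, which is not reproduced here.

```python
import numpy as np

# Illustrative VE noise level sigma(t) for continuous t in [0, 1].
# SIGMA_MIN / SIGMA_MAX are hypothetical values chosen for the sketch.
SIGMA_MIN, SIGMA_MAX = 0.01, 50.0

def sigma(t):
    """Geometric interpolation from SIGMA_MIN (t=0) to SIGMA_MAX (t=1)."""
    return SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** t

def perturb(x0, t, rng):
    """Forward VE perturbation: x_t = x_0 + sigma(t) * eps, eps ~ N(0, I)."""
    return x0 + sigma(t) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 64, 64))  # toy batch standing in for images
t = rng.uniform(0.0, 1.0)              # continuous training-time sample of t
xt = perturb(x0, t, rng)
```

Training then draws `t` continuously from [0, 1], as in the abstract, rather than from a fixed discrete grid.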
Problem

Research questions and friction points this paper is trying to address.

Standard diffusion models do not follow geodesic (shortest) paths in probability space.
Many intermediate time steps are required, driving up training and sampling costs.
Can a geodesic formulation cut these costs without sacrificing generative quality?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geodesic Diffusion Model with Fisher-Rao metric
Variance-exploding noise scheduler for efficiency
Reduced time steps for faster training and sampling
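The reduced-step sampling idea can be sketched as follows. This is a generic deterministic update under a VE schedule with 15 evenly spaced steps; the `denoise` placeholder stands in for the trained network, and the update rule is an assumed DDIM-style step, not necessarily the paper's exact sampler.

```python
import numpy as np

# Hypothetical VE schedule, matching the common parameterization.
SIGMA_MIN, SIGMA_MAX = 0.01, 50.0

def sigma(t):
    return SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** t

def denoise(x, t):
    # Placeholder for the learned denoiser x0_hat = D(x_t, t);
    # here it simply predicts zeros so the sketch is runnable.
    return np.zeros_like(x)

def sample(shape, n_steps=15, seed=0):
    rng = np.random.default_rng(seed)
    ts = np.linspace(1.0, 0.0, n_steps + 1)      # 15 evenly spaced steps, t: 1 -> 0
    x = sigma(1.0) * rng.standard_normal(shape)  # draw from the Gaussian prior
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        x0_hat = denoise(x, t_cur)
        # Deterministic update: keep the clean estimate and rescale the
        # residual noise from level sigma(t_cur) down to sigma(t_next).
        x = x0_hat + (sigma(t_next) / sigma(t_cur)) * (x - x0_hat)
    return x

out = sample((1, 64, 64))
```

With only 15 steps, each update must cover a large stretch of the path, which is why placing the path on a geodesic (so no step is wasted on detours) matters for sample quality.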
Teng Zhang
Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
Hongxu Jiang
University of Florida
Generative AI, Medical Imaging, Deep Learning
Kuang Gong
Assistant Professor of Biomedical Engineering, University of Florida
PET, MRI, CT, Inverse Problem, Machine Learning
Wei Shao
Department of Medicine, University of Florida, Gainesville, FL, USA