PFDiff: Training-Free Acceleration of Diffusion Models Combining Past and Future Scores

📅 2024-08-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models suffer from low sampling efficiency, primarily due to high numbers of function evaluations (NFEs) and discretization errors inherent in first-order ODE solvers. This paper proposes a training-free timestep-skipping sampling framework. Methodologically, it introduces three key innovations: (1) a “springboard”-inspired mechanism for reusing historical score estimates, enhancing past-score utilization; (2) a Nesterov-inspired forward momentum correction that incorporates predictive future gradients to suppress error accumulation at low NFEs; and (3) orthogonal compatibility with mainstream ODE solvers—including DDIM and PLMS—without architectural or training modifications. Empirically, the method achieves FID=16.46 on ImageNet 64×64 in only four steps (vs. 138.81 for DDIM), and FID=13.06 on Stable Diffusion with ten steps (CFG=7.5), significantly outperforming existing training-free acceleration approaches.

📝 Abstract
Diffusion Probabilistic Models (DPMs) have shown remarkable potential in image generation, but their sampling efficiency is hindered by the need for numerous denoising steps. Most existing solutions accelerate sampling by proposing fast ODE solvers. However, the inevitable discretization errors of these solvers are significantly magnified when the number of function evaluations (NFE) is small. In this work, we propose PFDiff, a novel training-free and orthogonal timestep-skipping strategy that enables existing fast ODE solvers to operate with fewer NFE. Specifically, PFDiff first utilizes score replacement from past time steps to predict a "springboard". It then employs this "springboard", together with foresight updates inspired by Nesterov momentum, to rapidly update the current intermediate states. This approach effectively reduces unnecessary NFE while correcting discretization errors inherent in first-order ODE solvers. Experimental results demonstrate that PFDiff exhibits flexible applicability across various pre-trained DPMs, particularly excelling in conditional DPMs and surpassing previous state-of-the-art training-free methods. For instance, using DDIM as a baseline, we achieve 16.46 FID (4 NFE) compared to 138.81 FID with DDIM on ImageNet 64x64 with classifier guidance, and 13.06 FID (10 NFE) on Stable Diffusion with a 7.5 guidance scale. Code is available at https://github.com/onefly123/PFDiff.
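As a rough illustration of the timestep-skipping idea described above (a minimal sketch, not the authors' exact algorithm), the loop below reuses the previous score as a "springboard" and takes a Nesterov-style foresight step. The names `score_fn`, `step_fn`, and the toy Euler update are assumptions for illustration; PFDiff proper wraps DDIM- or PLMS-style updates.

```python
def euler_step(x, score, t, t_next):
    # Generic first-order update: x_{t_next} = x_t + (t_next - t) * score.
    # Stand-in for a DDIM/PLMS update in the actual method.
    return x + (t_next - t) * score

def pfdiff_like_sampler(score_fn, x, timesteps, step_fn=euler_step):
    """Sketch of a PFDiff-style timestep-skipping loop.

    Reuse the cached past score to reach a "springboard" state for free,
    evaluate the score once there, and use that "future" score to advance
    the current state two intervals at once. Returns (sample, NFE count).
    """
    prev_score = None
    i = 0
    nfe = 0
    while i < len(timesteps) - 1:
        t = timesteps[i]
        if prev_score is not None and i + 2 < len(timesteps):
            # Springboard: advance with the cached past score (no new NFE).
            x_spring = step_fn(x, prev_score, t, timesteps[i + 1])
            # Foresight (Nesterov-like): one evaluation at the springboard ...
            prev_score = score_fn(x_spring, timesteps[i + 1])
            nfe += 1
            # ... used to jump the current state over the intermediate step.
            x = step_fn(x, prev_score, t, timesteps[i + 2])
            i += 2
        else:
            # Ordinary first-order step (used to bootstrap the cache).
            prev_score = score_fn(x, t)
            nfe += 1
            x = step_fn(x, prev_score, t, timesteps[i + 1])
            i += 1
    return x, nfe

# Toy usage: six timesteps cost 3 evaluations here instead of the
# baseline 5, matching the paper's goal of cutting NFE without training.
sample, nfe = pfdiff_like_sampler(
    lambda x, t: -x, 1.0, [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
)
```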
Problem

Research questions and friction points this paper is trying to address.

Low sampling efficiency of diffusion models, caused by many denoising steps.
High function-evaluation counts (NFE) required by existing ODE solvers.
Discretization errors of fast first-order ODE solvers, magnified at low NFE.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free, timestep-skipping acceleration technique
Combines reuse of past scores with Nesterov-inspired future-score prediction
Significantly reduces function evaluations while remaining orthogonal to existing ODE solvers
Guangyi Wang
School of Informatics, Xiamen University
Yuren Cai
School of Informatics, Xiamen University
Lijiang Li
Xiamen University
Wei Peng
Department of Psychiatry and Behavioral Sciences, Stanford University
Songzhi Su
School of Informatics, Xiamen University