Faster Diffusion Models via Higher-Order Approximation

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of efficient sampling from diffusion models without additional training: specifically, how to provably accelerate convergence to the target distribution at a theoretically guaranteed rate, without relying on strong assumptions such as smoothness or log-concavity. The paper proposes a training-free, high-order probability flow ODE solver that approximates the integration path via Lagrange interpolation and successive refinement, requiring only a sufficiently accurate score function estimate. Theoretically, under exact score evaluation, the method reaches $\varepsilon$ total-variation distance to the target using only $O(d^{1+2/K}\,\varepsilon^{-1/K})$ score evaluations for any fixed integer $K$, and its error degrades gracefully as the score estimation bias grows. Unlike prior approaches, this is the first to attain super-convergent acceleration, i.e., convergence order exceeding one, without distributional assumptions, thereby bridging theoretical rigor with practical robustness.
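The acceleration mechanism resembles a multistep (Adams–Bashforth-style) ODE integrator: over each step, the unknown drift inside the probability flow ODE integral is replaced by a degree-$(K-1)$ Lagrange interpolant through the $K$ most recent score evaluations. The sketch below is a minimal illustration of that idea under simplifying assumptions, not the paper's exact algorithm (in particular, it omits the successive-refinement stage); the VP-type drift $-\tfrac{1}{2}(x + \text{score})$, the step schedule `t_grid`, and `score_fn` are placeholder choices.

```python
import numpy as np

def lagrange_weights(ts, t_lo, t_hi, n_quad=64):
    """Integrate each Lagrange basis polynomial over [t_lo, t_hi] by quadrature.

    ts: the K past time points the interpolant passes through.
    Returns w with w[j] = integral of basis_j(t) dt, so that the integral of the
    degree-(K-1) interpolant of samples f(ts[j]) is approximately sum_j w[j] f(ts[j]).
    """
    grid = np.linspace(t_lo, t_hi, n_quad)
    K = len(ts)
    w = np.zeros(K)
    for j in range(K):
        basis = np.ones_like(grid)
        for m in range(K):
            if m != j:
                basis *= (grid - ts[m]) / (ts[j] - ts[m])
        w[j] = np.trapz(basis, grid)   # signed integral (handles backward-in-time steps)
    return w

def multistep_sampler(score_fn, x_T, t_grid, K=3):
    """Toy K-step probability-flow-ODE sampler (illustrative only).

    Integrates dx/dt = -0.5 * (x + score(x, t)) backward along t_grid
    (a simple VP-type drift assumed for illustration), approximating the
    drift over each step by a Lagrange interpolant of past evaluations.
    """
    x = x_T
    hist_t, hist_v = [], []                    # past times and drift evaluations
    for n in range(len(t_grid) - 1):
        t, t_next = t_grid[n], t_grid[n + 1]
        v = -0.5 * (x + score_fn(x, t))        # one score evaluation per step
        hist_t.append(t); hist_v.append(v)
        hist_t, hist_v = hist_t[-K:], hist_v[-K:]
        w = lagrange_weights(np.array(hist_t), t, t_next)
        x = x + sum(wj * vj for wj, vj in zip(w, hist_v))   # integral of drift over the step
    return x
```

For K = 1 this reduces to an explicit Euler update; larger K reuses past score evaluations to obtain the higher-order local accuracy that underlies the $d^{1+2/K}\,\varepsilon^{-1/K}$ evaluation count.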

📝 Abstract
In this paper, we explore provable acceleration of diffusion models without any additional retraining. Focusing on the task of approximating a target data distribution in $\mathbb{R}^d$ to within $\varepsilon$ total-variation distance, we propose a principled, training-free sampling algorithm that requires only the order of $$ d^{1+2/K}\,\varepsilon^{-1/K} $$ score function evaluations (up to logarithmic factors) in the presence of accurate scores, where $K$ is an arbitrarily large fixed integer. This result applies to a broad class of target data distributions, without the need for assumptions such as smoothness or log-concavity. Our theory is robust vis-a-vis inexact score estimation, degrading gracefully as the score estimation error increases -- without demanding higher-order smoothness on the score estimates as assumed in previous work. The proposed algorithm draws insight from high-order ODE solvers, leveraging high-order Lagrange interpolation and successive refinement to approximate the integral derived from the probability flow ODE.
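To make concrete what "the integral derived from the probability flow ODE" refers to, one common VP-type parametrization (an illustrative convention, not necessarily the paper's exact schedule $\beta_t$) writes the ODE as
$$ \frac{\mathrm{d}x_t}{\mathrm{d}t} = -\tfrac{1}{2}\,\beta_t \bigl( x_t + \nabla_x \log p_t(x_t) \bigr), $$
so that the exact update between consecutive discretization times satisfies
$$ x_{t_{n+1}} = x_{t_n} - \tfrac{1}{2} \int_{t_n}^{t_{n+1}} \beta_t \bigl( x_t + \nabla_x \log p_t(x_t) \bigr)\, \mathrm{d}t . $$
The solver approximates the unknown integrand by its degree-$(K-1)$ Lagrange interpolant through $K$ previously computed (estimated) score evaluations and refines the resulting approximation successively. As a concrete reading of the rate, plugging $K=4$ into the stated bound gives on the order of $d^{3/2}\,\varepsilon^{-1/4}$ score evaluations, up to logarithmic factors.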
Problem

Research questions and friction points this paper is trying to address.

Accelerating diffusion models without retraining
Approximating data distribution with fewer evaluations
Robust theory for inexact score estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Higher-order ODE solvers for diffusion acceleration
Training-free sampling with Lagrange interpolation
Robust to inexact score estimation
Gen Li
Department of Statistics, Chinese University of Hong Kong.
Yuchen Zhou
Department of Statistics, University of Illinois Urbana-Champaign.
Yuting Wei
Statistics and Data Science at Wharton, University of Pennsylvania
High-dimensional statistics, nonparametric statistics, reinforcement learning, diffusion models
Yuxin Chen
Statistics and Data Science at Wharton, University of Pennsylvania.