One-step Diffusion Models with $f$-Divergence Distribution Matching

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow sampling speed of diffusion models and their poor single-step generation quality, this paper proposes *f-distill*, a unified framework for single-step diffusion distillation based on generalized *f*-divergence minimization. Unlike conventional reverse-KL distribution matching, the theoretical analysis shows that the gradient of any *f*-divergence is the teacher–student score difference weighted by a function of their density ratio, so reverse KL is merely one special case; the Jensen–Shannon and forward-KL divergences are identified as superior choices for distribution matching. Methodologically, *f-distill* combines variational score distillation with density-ratio-weighted gradient estimation to train the single-step generator. Experiments demonstrate state-of-the-art single-step image generation on ImageNet64, leading zero-shot text-to-image synthesis on MS-COCO, and significantly improved mode coverage and training stability.

📝 Abstract
Sampling from diffusion models involves a slow iterative process that hinders their practical deployment, especially for interactive applications. To accelerate generation speed, recent approaches distill a multi-step diffusion model into a single-step student generator via variational score distillation, which matches the distribution of samples generated by the student to the teacher's distribution. However, these approaches use the reverse Kullback-Leibler (KL) divergence for distribution matching which is known to be mode seeking. In this paper, we generalize the distribution matching approach using a novel $f$-divergence minimization framework, termed $f$-distill, that covers different divergences with different trade-offs in terms of mode coverage and training variance. We derive the gradient of the $f$-divergence between the teacher and student distributions and show that it is expressed as the product of their score differences and a weighting function determined by their density ratio. This weighting function naturally emphasizes samples with higher density in the teacher distribution, when using a less mode-seeking divergence. We observe that the popular variational score distillation approach using the reverse-KL divergence is a special case within our framework. Empirically, we demonstrate that alternative $f$-divergences, such as forward-KL and Jensen-Shannon divergences, outperform the current best variational score distillation methods across image generation tasks. In particular, when using Jensen-Shannon divergence, $f$-distill achieves current state-of-the-art one-step generation performance on ImageNet64 and zero-shot text-to-image generation on MS-COCO. Project page: https://research.nvidia.com/labs/genair/f-distill
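The gradient identity summarized in the abstract can be written out explicitly. The formula below is a reconstruction from the abstract's description, with $r(x) = p(x)/q_\theta(x)$ the teacher/student density ratio and $G_\theta$ the one-step generator; the proportionality constant and any time-dependent weighting are assumptions, not taken verbatim from the paper:

$$
\nabla_\theta D_f\big(p \,\|\, q_\theta\big) \;\propto\; -\,\mathbb{E}_{z,\; x = G_\theta(z)}\!\left[ \underbrace{f''\big(r(x)\big)\, r(x)^2}_{h(r(x))}\, \big(\nabla_x \log p(x) - \nabla_x \log q_\theta(x)\big)\, \frac{\partial G_\theta(z)}{\partial \theta} \right]
$$

For reverse KL, $f(u) = -\log u$ gives a constant weight $h(r) \equiv 1$, recovering standard variational score distillation; less mode-seeking choices such as forward KL ($h(r) = r$) up-weight samples with higher density under the teacher, as the abstract describes.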
Problem

Research questions and friction points this paper is trying to address.

Accelerate diffusion model generation speed
Generalize distribution matching with f-divergence
Achieve state-of-the-art one-step generation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-step diffusion model
f-divergence minimization framework
Jensen-Shannon divergence outperforms reverse KL
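The divergence-dependent weighting behind these contributions can be sketched numerically. This is a minimal illustration, not the authors' implementation: the generator functions $f$ and the closed-form weights $h(r) = f''(r)\,r^2$ below are standard *f*-divergence choices that we assume match the paper's setup, with $r$ the teacher/student density ratio.

```python
# Sketch (assumed, not the authors' code) of the weighting function
# h(r) = f''(r) * r^2 that multiplies the teacher/student score difference
# in the f-distill gradient, where r = p_teacher(x) / p_student(x).

def h_reverse_kl(r: float) -> float:
    # f(u) = -log u  =>  f''(u) = 1/u^2  =>  h(r) = 1
    # Constant weight: recovers plain variational score distillation.
    return 1.0

def h_forward_kl(r: float) -> float:
    # f(u) = u log u  =>  f''(u) = 1/u  =>  h(r) = r
    # Up-weights samples that are likely under the teacher (mode-covering).
    return r

def h_jensen_shannon(r: float) -> float:
    # f(u) = u log u - (1 + u) log((1 + u)/2)  =>  f''(u) = 1/(u (1 + u))
    # =>  h(r) = r / (1 + r): bounded in (0, 1), between the two KLs.
    return r / (1.0 + r)

for r in (0.1, 1.0, 10.0):
    print(r, h_reverse_kl(r), h_forward_kl(r), round(h_jensen_shannon(r), 3))
```

The bounded JS weight suggests why it can trade off mode coverage against gradient variance: it grows with the density ratio like forward KL, but never exceeds 1.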