Adversarial Distribution Matching for Diffusion Distillation Towards Efficient Image and Video Synthesis

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing distribution matching distillation (DMD) methods rely on reverse KL divergence minimization, which suffers from mode collapse and limits compression efficacy for both one-step and multi-step diffusion models. To address this, we propose an adversarial distribution matching distillation framework: a diffusion-based discriminator is designed to jointly align score estimates of real and generated data in both latent and pixel spaces; KL optimization is replaced by an adversarial mechanism to ensure full support coverage of the target distribution; and an ODE-sampled distribution loss is integrated with multi-space discrimination for joint optimization. On SDXL, our one-step distilled model achieves superior FID over DMD2 with significant inference speedup. For multi-step distillation, our method attains state-of-the-art performance on SD3-Medium, SD3.5-Large, and CogVideoX—demonstrating its generality and superiority for efficient image and video synthesis.

📝 Abstract
Distribution Matching Distillation (DMD) is a promising score distillation technique that compresses pre-trained teacher diffusion models into efficient one-step or multi-step student generators. Nevertheless, its reliance on reverse Kullback-Leibler (KL) divergence minimization potentially induces mode collapse (or mode-seeking) in certain applications. To circumvent this inherent drawback, we propose Adversarial Distribution Matching (ADM), a novel framework that leverages diffusion-based discriminators to align the latent predictions between real and fake score estimators for score distillation in an adversarial manner. In the context of extremely challenging one-step distillation, we further improve the pre-trained generator by adversarial distillation with hybrid discriminators in both latent and pixel spaces. Different from the mean squared error used in DMD2 pre-training, our method incorporates a distributional loss on ODE pairs collected from the teacher model, thus providing a better initialization for score distillation fine-tuning in the next stage. By combining the adversarial distillation pre-training with ADM fine-tuning into a unified pipeline termed DMDX, our proposed method achieves superior one-step performance on SDXL compared to DMD2 while consuming less GPU time. Additional experiments that apply multi-step ADM distillation on SD3-Medium, SD3.5-Large, and CogVideoX set a new benchmark towards efficient image and video synthesis.
Problem

Research questions and friction points this paper is trying to address.

Address mode collapse in diffusion distillation via adversarial alignment
Improve one-step distillation with hybrid latent-pixel discriminators
Enhance pre-training using distributional loss for better initialization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Distribution Matching for score distillation
Hybrid discriminators in latent and pixel spaces
Distributional loss on ODE pairs for initialization
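The hybrid latent/pixel discrimination above can be sketched with standard hinge GAN losses, a common choice for adversarial distillation. The plain-number "logits" and the weighting are assumptions for illustration; the paper's diffusion-based discriminator architecture and exact objective are not reproduced here.

```python
# Hypothetical sketch of a hybrid adversarial objective: hinge losses from
# a latent-space and a pixel-space discriminator are summed for the
# generator. The raw logit lists stand in for discriminator outputs.

def hinge_d_loss(real_logits, fake_logits):
    # Discriminator hinge loss: push real logits above +1 and fake below -1.
    return (sum(max(0.0, 1.0 - r) for r in real_logits) / len(real_logits)
            + sum(max(0.0, 1.0 + f) for f in fake_logits) / len(fake_logits))

def hinge_g_loss(fake_logits):
    # Generator loss: raise the discriminator's score on generated samples.
    return -sum(fake_logits) / len(fake_logits)

def hybrid_g_loss(latent_logits, pixel_logits, w_pixel=1.0):
    # Combine latent- and pixel-space generator terms (weight is assumed).
    return hinge_g_loss(latent_logits) + w_pixel * hinge_g_loss(pixel_logits)
```

A discriminator that already separates real from fake by a margin of 1 (e.g. real logit +2, fake logit -2) incurs zero hinge loss, so only ambiguous samples drive its updates.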
👥 Authors

Yanzuo Lu (Imperial College London)
Yuxi Ren (ByteDance Seed Vision)
Xin Xia (ByteDance Seed Vision)
Shanchuan Lin (ByteDance)
Xing Wang (ByteDance Seed Vision)
Xuefeng Xiao (ByteDance Seed)
Andy J. Ma (Sun Yat-Sen University; Guangdong Provincial Key Laboratory of Information Security Technology; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education; Pazhou Lab (HuangPu), Guangzhou, China)
Xiaohua Xie (Sun Yat-Sen University; Guangdong Provincial Key Laboratory of Information Security Technology; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education; Pazhou Lab (HuangPu), Guangzhou, China)
Jian-Huang Lai (Sun Yat-Sen University; Guangdong Provincial Key Laboratory of Information Security Technology; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education; Pazhou Lab (HuangPu), Guangzhou, China)