Distribution Matching Distillation Meets Reinforcement Learning

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitation that few-step diffusion models inherit a quality ceiling from the multi-step teachers they are distilled from, this paper proposes the Distribution Matching Distillation with Reinforcement Learning (DMDR) framework. DMDR tightly integrates Distribution Matching Distillation (DMD) with reinforcement learning (RL): it employs the distribution-matching loss as the regularizer for RL training, introduces a dynamic distribution guidance mechanism to improve mode coverage, and designs a dynamic renoise sampling strategy to stabilize the initial distillation. Trained end-to-end, DMDR breaks through the performance ceiling of conventional knowledge distillation: the resulting few-step student surpasses its multi-step teacher in both visual fidelity and prompt alignment, achieving state-of-the-art results among few-step methods across major benchmarks.
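
Conceptually, the generator update in such a scheme couples two gradients: an RL term that pushes samples toward higher reward, and the DMD term that keeps the student's output distribution anchored to the teacher's. Below is a minimal PyTorch sketch of what that combined objective could look like; the function names, the differentiable (ReFL-style) reward model, and the weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def dmdr_generator_loss(x_fake, reward_model, score_real, score_fake, lam=0.25):
    """Hypothetical combined objective: RL reward maximization with the
    DMD distribution-matching loss as the regularizer.

    x_fake:       samples from the few-step generator (requires grad)
    reward_model: differentiable reward model (assumed, ReFL-style)
    score_real:   score estimate under the (guided) teacher distribution
    score_fake:   score estimate under the learned fake-distribution model
    lam:          regularization weight (assumed value)
    """
    # DMD gradient: difference between fake and real scores, treated as a
    # fixed target so gradients flow only through x_fake.
    with torch.no_grad():
        grad = score_fake(x_fake) - score_real(x_fake)
    dmd_loss = (x_fake * grad).flatten(1).sum(dim=1).mean()

    # RL term: maximize the reward of generated samples.
    rl_loss = -reward_model(x_fake).mean()

    # The DMD loss stands in for the usual KL-to-reference regularizer
    # of RL fine-tuning.
    return rl_loss + lam * dmd_loss

# Toy usage with stand-in callables:
x = torch.randn(4, 3, 8, 8, requires_grad=True)
loss = dmdr_generator_loss(
    x,
    reward_model=lambda s: -s.pow(2).flatten(1).mean(dim=1),  # dummy reward
    score_real=lambda s: -s,                                  # dummy teacher score
    score_fake=lambda s: -0.5 * s,                            # dummy fake score
)
loss.backward()
```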

📝 Abstract
Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model into a few-step one to improve inference efficiency. However, the performance of the latter is often capped by the former. To circumvent this dilemma, we propose DMDR, a novel framework that incorporates Reinforcement Learning (RL) techniques into the distillation process. We show that for RL of the few-step generator, the DMD loss itself is a more effective regularization than the traditional ones. In turn, RL can guide the mode-coverage process in DMD more effectively. These findings allow us to unlock the capacity of the few-step generator by conducting distillation and RL simultaneously. Meanwhile, we design dynamic distribution guidance and dynamic renoise sampling training strategies to improve the initial distillation process. Experiments demonstrate that DMDR achieves leading visual quality and prompt coherence among few-step methods, and even exceeds the performance of its multi-step teacher.
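
In DMD, the "real" score is typically the teacher's classifier-free-guided output; one plausible reading of "dynamic distribution guidance" is that the guidance applied to this teacher distribution is adjusted over the course of training rather than held fixed. The sketch below illustrates that reading only; the annealing schedule and the weights `w_start`/`w_end` are assumptions, not taken from the paper.

```python
def guided_teacher_score(score_cond, score_uncond, step, total_steps,
                         w_start=8.0, w_end=3.0):
    """Hypothetical dynamic guidance: linearly anneal the classifier-free
    guidance weight applied to the teacher score that defines the 'real'
    distribution in DMD. score_cond/score_uncond are the conditional and
    unconditional teacher outputs (tensors of the same shape)."""
    frac = step / max(total_steps, 1)
    w = w_start + (w_end - w_start) * frac
    return score_uncond + w * (score_cond - score_uncond)
```
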
Problem

Research questions and friction points this paper is trying to address.

Improving inference efficiency of diffusion models through distillation
Overcoming performance limitations in few-step diffusion generators
Enhancing mode coverage via reinforcement learning integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines reinforcement learning with diffusion model distillation
Uses DMD loss as regularization for RL training
Implements dynamic distribution guidance and dynamic renoise sampling to strengthen the initial distillation (see the sketch after this list)
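
A rough sketch of what "dynamic renoise sampling" could mean in practice: student outputs are re-noised at diffusion timesteps whose distribution shifts over training, e.g. biased toward heavier noise early on to stabilize the distribution-matching signal. Everything below (the annealing schedule, the cosine noise scaling, the names) is an assumption for illustration, not the paper's exact recipe.

```python
import torch

def dynamic_renoise(x0, step, total_steps, num_train_timesteps=1000):
    """Hypothetical dynamic renoise sampling: draw a timestep from a range
    that starts biased toward heavy noise and anneals toward the full range,
    then re-noise the student's clean sample x0 at that timestep."""
    frac = step / max(total_steps, 1)
    # Early in training, sample only from the noisier half (assumed schedule).
    t_min = int((1.0 - frac) * 0.5 * num_train_timesteps)
    t = torch.randint(t_min, num_train_timesteps, (x0.shape[0],), device=x0.device)

    # Standard variance-preserving re-noising with a cosine alpha-bar schedule.
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / num_train_timesteps) ** 2
    alpha_bar = alpha_bar.view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise
    return x_t, t
```

The re-noised sample `x_t` would then feed the real and fake score networks when computing the DMD loss; biasing toward heavy noise early keeps the two score estimates in a regime where they remain informative before the student has converged.
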
👥 Authors
Dengyang Jiang · Northwestern Polytechnical University · Computer Vision, Deep Learning, Machine Learning
Dongyang Liu · MMLab, CUHK · Image/Video Generation, LLMs, VLMs
Zanyi Wang · Zhejiang University of Technology
Qilong Wu · Alibaba Group
Xin Jin · Alibaba Group
David Liu · The Chinese University of Hong Kong
Zhen Li · Alibaba Group
Mengmeng Wang · Alibaba Group
Peng Gao · Alibaba Group
Harry Yang · HKUST · Computer Vision, Machine Learning