Improving Discriminator Guidance in Diffusion Models

📅 2025-03-20
📈 Citations: 0
✹ Influential: 0
📄 PDF
đŸ€– AI Summary
Standard discriminator guidance employs cross-entropy loss, yet theoretical analysis reveals it increases the KL divergence between the model and data distributions—especially under discriminator overfitting—degrading generation quality. Method: We propose the first theoretically guaranteed discriminator guidance framework converging to the true data distribution: we reformulate the discriminator objective to directly minimize KL divergence, ensuring that Score-Matching diffusion models are guided toward the target distribution with strict theoretical guarantees. Contribution/Results: Through rigorous convergence analysis and extensive experiments across multiple datasets, our method consistently suppresses distributional shift, improving FID by 12–28% across all benchmarks. It significantly enhances sample fidelity and training robustness, establishing a principled foundation for reliable, high-quality diffusion-based generation.

Technology Category

Application Category

📝 Abstract
Discriminator Guidance has become a popular method for efficiently refining pre-trained Score-Matching Diffusion models. However, in this paper, we demonstrate that the standard implementation of this technique does not necessarily lead to a distribution closer to the real data distribution. Specifically, we show that training the discriminator using Cross-Entropy loss, as commonly done, can in fact increase the Kullback-Leibler divergence between the model and target distributions, particularly when the discriminator overfits. To address this, we propose a theoretically sound training objective for discriminator guidance that properly minimizes the KL divergence. We analyze its properties and demonstrate empirically across multiple datasets that our proposed method consistently improves over the conventional method by producing samples of higher quality.
Problem

Research questions and friction points this paper is trying to address.

Standard discriminator guidance increases KL divergence.
Cross-Entropy loss can cause discriminator overfitting.
Proposed method minimizes KL divergence, improves sample quality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes new training objective for discriminator guidance
Minimizes KL divergence effectively
Improves sample quality across datasets
🔎 Similar Papers
No similar papers found.
A
Alexandre Verine
École Normale SupĂ©rieure Paris, PSL University, Paris, France
M
Mehdi Inane
Mila - Quebec AI Institute, Université de Montréal, Montreal, Canada
Florian Le Bronnec
Florian Le Bronnec
MILES, LAMSADE, Université Paris-Dauphine, Université PSL
B
Benjamin Négrevergne
LAMSADE, CNRS, Université Paris-Dauphine-PSL, Paris, France
Y
Y. Chevaleyre
LAMSADE, CNRS, Université Paris-Dauphine-PSL, Paris, France