Seasoning Generative Models for a Generalization Aftertaste

📅 2026-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of theoretical guarantees for generalization in existing discriminator-guided generative models. Building upon the strong duality of f-divergences, we propose a universal discriminator-guided refinement framework that enhances the generalization capability of any generative model, including diffusion models. We provide the first theoretical proof that this guidance mechanism provably reduces the generalization gap and establish a quantitative relationship between the gap reduction and the Rademacher complexity of the discriminator class. Furthermore, our framework offers a unified theoretical explanation for the empirical success of recent score-based diffusion methods. While maintaining broad applicability, the proposed approach delivers rigorous theoretical justification for widely used yet previously heuristic refinement strategies.
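The quantitative relationship mentioned above is stated in terms of the Rademacher complexity of the discriminator class. As background (this is the standard definition, not the paper's exact statement), the empirical Rademacher complexity of a class $\mathcal{T}$ of discriminators over a sample $S = \{x_1, \dots, x_n\}$ is

$$
\widehat{\mathfrak{R}}_S(\mathcal{T}) \;=\; \mathbb{E}_{\sigma}\left[\,\sup_{T \in \mathcal{T}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i\, T(x_i)\right],
$$

where the $\sigma_i$ are i.i.d. Rademacher random variables taking values in $\{-1, +1\}$. Intuitively, a smaller $\widehat{\mathfrak{R}}_S(\mathcal{T})$ means the discriminator class is harder to fit to random noise, which is the standard route to tighter generalization bounds.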

📝 Abstract
The use of discriminators to train or fine-tune generative models has proven to be a rather successful framework. A notable example is Generative Adversarial Networks (GANs), which minimize a loss incurred by trained discriminators, alongside other paradigms that boost generative models via discriminators satisfying weak-learner constraints. More recently, even diffusion models have shown advantages under some form of discriminator guidance. In this work, we extend a strong-duality result for $f$-divergences, which gives rise to a discriminator-guided recipe that allows us to \textit{refine} any generative model. We then show that the refined generative models provably improve generalization compared to their non-refined counterparts. In particular, our analysis reveals that the improvement in the generalization gap is governed by the Rademacher complexity of the discriminator set used for refinement. Our recipe subsumes a recently introduced score-based diffusion approach (Kim et al., 2022) that has shown great empirical success, and our analysis sheds light on the generalization guarantees of that method. Thus, our work provides a theoretical validation for existing work, suggests avenues for new algorithms, and contributes to our understanding of generalization in generative models at large.
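For context, the strong-duality result the abstract builds on is related to the well-known variational representation of $f$-divergences (shown here in its standard form, which may differ in detail from the paper's extension):

$$
D_f(P \,\|\, Q) \;=\; \sup_{T}\; \mathbb{E}_{x \sim P}\left[T(x)\right] \;-\; \mathbb{E}_{x \sim Q}\left[f^{*}(T(x))\right],
$$

where $f^{*}$ denotes the convex conjugate of $f$ and the supremum ranges over measurable functions $T$. Because the supremum is attained over a class of functions that act as discriminators between $P$ and $Q$, restricting $T$ to a trainable discriminator class yields a lower bound on the divergence, which is the mechanism that makes discriminator-guided refinement possible.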
Problem

Research questions and friction points this paper is trying to address.

generalization
generative models
discriminator guidance
f-divergences
Rademacher complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

discriminator-guided refinement
f-divergence duality
generalization guarantee
Rademacher complexity
generative model refinement