GANs Secretly Perform Approximate Bayesian Model Selection

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the well-known instability, overfitting, and poor generalization of GAN training. It reformulates adversarial training as approximate marginal likelihood optimization, revealing its implicit role as approximate Bayesian model selection. Specifically, the GAN generator is interpreted as a Bayesian neural network with stochastic weights, and Occam's razor is formally incorporated to guide regularization, establishing a theoretical connection between flat minima and minimum description length. A unified loss-reshaping framework is proposed that integrates variational inference objectives with adversarial learning. Empirically, the approach yields smoother loss landscapes, mitigates overfitting, and consistently improves generative quality, training stability, and out-of-distribution generalization across diverse GAN architectures, including DCGAN, StyleGAN2, and BigGAN.

📝 Abstract
Generative Adversarial Networks (GANs) are popular and successful generative models. Despite their success, optimization is notoriously challenging and they require regularization against overfitting. In this work, we explain the success and limitations of GANs by interpreting them as probabilistic generative models. This interpretation enables us to view GANs as Bayesian neural networks with partial stochasticity, allowing us to establish conditions of universal approximation. We can then cast the adversarial-style optimization of several variants of GANs as the optimization of a proxy for the marginal likelihood. Taking advantage of the connection between marginal likelihood optimization and Occam's razor, we can define regularization and optimization strategies to smooth the loss landscape and search for solutions with minimum description length, which are associated with flat minima and good generalization. The results on a wide range of experiments indicate that these strategies lead to performance improvements and pave the way to a deeper understanding of regularization strategies for GANs.
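The abstract's link between marginal likelihood optimization and Occam's razor can be illustrated with a toy calculation (not from the paper; the two loss landscapes and the noise scale below are invented for illustration). With stochastic weights, the marginal likelihood integrates the likelihood over the weight distribution, so a flat minimum scores higher than an equally deep sharp one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy 1-D "loss landscapes" with equally deep minima at w = 1:
# one sharp, one flat (both hypothetical, for illustration only).
sharp = lambda w: 50.0 * (w - 1.0) ** 2
flat = lambda w: 0.5 * (w - 1.0) ** 2

def marginal_likelihood_proxy(loss, w_star=1.0, sigma=0.3, n=20000):
    """Crude Monte Carlo proxy for the marginal likelihood: average the
    likelihood exp(-loss) under a Gaussian distribution over weights."""
    w = w_star + sigma * rng.standard_normal(n)
    return np.mean(np.exp(-loss(w)))

ml_sharp = marginal_likelihood_proxy(sharp)
ml_flat = marginal_likelihood_proxy(flat)

# Occam's razor effect: the flat minimum integrates to a higher
# marginal likelihood even though both minima fit equally well.
print(ml_flat > ml_sharp)  # True
```

This is the intuition behind searching for minimum-description-length solutions: under weight stochasticity, flat minima keep the likelihood high over a wide neighborhood, which is exactly what the marginal likelihood rewards.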
Problem

Research questions and friction points this paper is trying to address.

Explains GANs as Bayesian models with partial stochasticity
Links GAN optimization to marginal likelihood proxy
Proposes regularization strategies for better GAN performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

GANs as Bayesian neural networks
Adversarial optimization as marginal likelihood proxy
Regularization via minimum description length
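The smoothing effect described above can be sketched numerically. Averaging a loss over Gaussian weight perturbations is one generic way to reshape a landscape toward flat minima; the paper's actual objective is derived from variational inference, so the bumpy loss and noise scale below are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bumpy loss: a smooth quadratic trend plus sharp oscillations.
loss = lambda w: w ** 2 + np.sin(20.0 * w)

def smoothed_loss(w, sigma=0.1, n=50000):
    """Monte Carlo estimate of E[loss(w + eps)] with eps ~ N(0, sigma^2):
    the expected loss under Gaussian weight noise."""
    eps = sigma * rng.standard_normal(n)
    return np.mean(loss(w + eps))

# Compare the oscillatory component of the raw and smoothed landscapes
# (subtracting each one's smooth quadratic trend).
grid = np.linspace(-1.0, 1.0, 41)
raw_bumps = np.array([loss(w) - w ** 2 for w in grid])
smooth_bumps = np.array([smoothed_loss(w) - (w ** 2 + 0.1 ** 2) for w in grid])

# Analytically the oscillation shrinks by exp(-(20 * sigma)^2 / 2) ~ 0.14,
# so the smoothed landscape is far less bumpy than the raw one.
print(np.abs(smooth_bumps).max() < 0.5 * np.abs(raw_bumps).max())  # True
```

The quadratic trend survives while the sharp wiggles are damped, which is the sense in which noise-averaged objectives favor flat, low-description-length solutions.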