BOLT-GAN: Bayes-Optimal Loss for Stable GAN Training

📅 2025-10-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the training instability of Wasserstein GANs (WGANs), this paper proposes BOLT-GAN, a GAN framework that integrates the Bayes Optimal Learning Threshold (BOLT) into the design of the loss. With a Lipschitz-continuous discriminator, BOLT-GAN implicitly minimizes a metric distance distinct from the Earth Mover (Wasserstein) distance. Crucially, it achieves stable training solely by reformulating the discriminator's loss function, requiring no additional regularization, architectural modifications, or hyperparameter tuning. Experiments on four standard image generation benchmarks (CIFAR-10, CelebA-64, LSUN Bedroom-64, and LSUN Church-64) show consistent improvements: Fréchet Inception Distance (FID) drops by 10-60% relative to WGAN, with improved generation quality and training robustness throughout. These results suggest that BOLT is a broadly applicable principle for designing metric-driven adversarial learning objectives.

📝 Abstract
We introduce BOLT-GAN, a simple yet effective modification of the WGAN framework inspired by the Bayes Optimal Learning Threshold (BOLT). We show that with a Lipschitz continuous discriminator, BOLT-GAN implicitly minimizes a different metric distance than the Earth Mover (Wasserstein) distance and achieves better training stability. Empirical evaluations on four standard image generation benchmarks (CIFAR-10, CelebA-64, LSUN Bedroom-64, and LSUN Church-64) show that BOLT-GAN consistently outperforms WGAN, achieving 10-60% lower Fréchet Inception Distance (FID). Our results suggest that BOLT is a broadly applicable principle for enhancing GAN training.
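For context on what BOLT-GAN reformulates, the sketch below shows the standard WGAN critic objective with weight clipping to enforce the Lipschitz constraint. The 1-D toy data, linear critic, and clipping constant are illustrative assumptions; the BOLT-specific loss itself is not given in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w, b):
    # Toy linear critic D(x) = w*x + b; its Lipschitz constant is |w|.
    return w * x + b

def wgan_critic_loss(real, fake, w, b):
    # Standard WGAN critic objective (minimized by the critic):
    #   E[D(fake)] - E[D(real)], which approximates -W1(P_real, P_fake)
    # when D is constrained to be 1-Lipschitz.
    return critic(fake, w, b).mean() - critic(real, w, b).mean()

# 1-D toy distributions: "real" ~ N(2, 1), generator output ~ N(0, 1).
real = rng.normal(2.0, 1.0, 1000)
fake = rng.normal(0.0, 1.0, 1000)

w, b, lr, clip = 0.1, 0.0, 0.05, 1.0
for _ in range(200):
    # Exact gradient of the loss w.r.t. w is mean(fake) - mean(real);
    # the bias b cancels out of the objective.
    w -= lr * (fake.mean() - real.mean())
    w = np.clip(w, -clip, clip)  # weight clipping enforces |w| <= 1

# With |w| <= 1, the critic's value gap approximates W1 ~ |2 - 0| = 2.
print(round(float(-wgan_critic_loss(real, fake, w, b)), 2))
```

WGAN variants replace the clipping step (e.g. with a gradient penalty); per the abstract, BOLT-GAN instead changes the critic loss itself while keeping the Lipschitz requirement.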
Problem

Research questions and friction points this paper is trying to address.

Improving GAN training stability with Bayes-Optimal Loss
Modifying WGAN framework to minimize alternative metric distance
Achieving consistent performance gains across image generation benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modifies WGAN framework using Bayes Optimal Learning Threshold
Implicitly minimizes different metric distance than Wasserstein
Achieves better training stability and lower FID scores
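Since the reported gains are measured in FID, the sketch below computes the underlying Fréchet distance between two Gaussians. FID applies this formula to Gaussian fits of Inception features; the synthetic 8-dimensional feature vectors here are a stand-in and purely illustrative.

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    # Tr((S1 S2)^(1/2)) via the eigenvalues of S1 @ S2, which are real
    # and non-negative for a product of two PSD covariance matrices.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * covmean_trace)

def fit_gaussian(x):
    # Fit mean and covariance of a batch of feature vectors.
    return x.mean(axis=0), np.cov(x, rowvar=False)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(5000, 8))  # stand-in "real" features
fake = rng.normal(0.5, 1.2, size=(5000, 8))  # stand-in "generated" features

fid = frechet_distance(*fit_gaussian(real), *fit_gaussian(fake))
print(round(fid, 3))
```

Lower is better: identical distributions give a distance of (numerically) zero, so a 10-60% FID drop means the generated feature distribution moved substantially closer to the real one.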