🤖 AI Summary
To address the training instability of Wasserstein GANs (WGANs) and the inherent limitations of the Wasserstein distance as a probability metric, this paper proposes BOLT-GAN, a GAN framework that integrates Bayesian optimal decision theory into loss design. Under a Lipschitz continuity constraint on the discriminator, BOLT-GAN implicitly minimizes a metric distance different from the Wasserstein distance. Crucially, it achieves stable training solely by reformulating the discriminator's loss function, requiring no additional regularization, architectural modifications, or hyperparameter tuning. Experiments on four standard image generation benchmarks (CIFAR-10, CelebA-64, LSUN Bedroom-64, and LSUN Church-64) show consistent improvements: Fréchet Inception Distance (FID) drops by 10–60% relative to WGAN, while generation quality and training robustness are enhanced. These results suggest that the Bayes Optimal Learning Threshold (BOLT) is a broadly applicable principle for designing principled, metric-driven adversarial learning objectives.
📝 Abstract
We introduce BOLT-GAN, a simple yet effective modification of the WGAN framework inspired by the Bayes Optimal Learning Threshold (BOLT). We show that with a Lipschitz continuous discriminator, BOLT-GAN implicitly minimizes a metric distance different from the Earth Mover's (Wasserstein) distance and achieves better training stability. Empirical evaluations on four standard image generation benchmarks (CIFAR-10, CelebA-64, LSUN Bedroom-64, and LSUN Church-64) show that BOLT-GAN consistently outperforms WGAN, achieving 10–60% lower Fréchet Inception Distance (FID). Our results suggest that BOLT is a broadly applicable principle for enhancing GAN training.