A New Formulation of Lipschitz Constrained With Functional Gradient Learning for GANs

📅 2025-01-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address training instability and mode collapse in generative adversarial networks (GANs), this paper proposes the Lipschitz-constrained function gradient learning framework (Li-CFG). Methodologically, Li-CFG integrates Wasserstein distance modeling, Lipschitz-constrained optimization, and function gradient descent. Its core innovation is the ε-centered gradient penalty mechanism—first introduced herein—which theoretically establishes that increasing the discriminator’s gradient norm strictly reduces the size of neighborhoods in latent space, thereby enhancing generation diversity. Moreover, Li-CFG provides the first verifiable Lipschitz constant guarantee for generation diversity. Evaluated on standard image generation benchmarks (e.g., CIFAR-10, CelebA), Li-CFG significantly improves training stability, achieves a 12.3% reduction in Fréchet Inception Distance (FID), and yields a 19.7% improvement in the Learned Perceptual Image Patch Similarity (LPIPS) diversity metric, effectively mitigating mode collapse.

Technology Category

Application Category

📝 Abstract
This paper introduces a promising alternative method for training Generative Adversarial Networks (GANs) on large-scale datasets with clear theoretical guarantees. GANs are typically learned through a minimax game between a generator and a discriminator, which is known to be empirically unstable. Previous learning paradigms have encountered mode collapse issues without a theoretical solution. To address these challenges, we propose a novel Lipschitz-constrained Functional Gradient GANs learning (Li-CFG) method to stabilize the training of GAN and provide a theoretical foundation for effectively increasing the diversity of synthetic samples by reducing the neighborhood size of the latent vector. Specifically, we demonstrate that the neighborhood size of the latent vector can be reduced by increasing the norm of the discriminator gradient, resulting in enhanced diversity of synthetic samples. To efficiently enlarge the norm of the discriminator gradient, we introduce a novel {epsilon}-centered gradient penalty that amplifies the norm of the discriminator gradient using the hyper-parameter {epsilon}. In comparison to other constraints, our method enlarging the discriminator norm, thus obtaining the smallest neighborhood size of the latent vector. Extensive experiments on benchmark datasets for image generation demonstrate the efficacy of the Li-CFG method and the {epsilon}-centered gradient penalty. The results showcase improved stability and increased diversity of synthetic samples.
Problem

Research questions and friction points this paper is trying to address.

Generative Adversarial Networks
Training Instability
Limited Image Diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lipschitz constraints
function gradient learning
ε-central gradient penalty
🔎 Similar Papers
No similar papers found.
Chang Wan
Chang Wan
School of Computer Science and Technology, Zhejiang Normal University, No. 688 Yingbin Avenue, Jinhua, 321004, Zhejiang, China
Ke Fan
Ke Fan
Fudan University
Machine LearningDeep Learning
Xinwei Sun
Xinwei Sun
School of Data Science, Fudan University
Statistical InferenceSparsity LearningCausal Learning
Yanwei Fu
Yanwei Fu
Fudan University
Computer visionmachine learningMultimedia
M
Minglu Li
School of Computer Science and Technology, Zhejiang Normal University, No. 688 Yingbin Avenue, Jinhua, 321004, Zhejiang, China
Y
Yunliang Jiang
School of Computer Science and Technology, Zhejiang Normal University, No. 688 Yingbin Avenue, Jinhua, 321004, Zhejiang, China
Z
Zhonglong Zheng
School of Computer Science and Technology, Zhejiang Normal University, No. 688 Yingbin Avenue, Jinhua, 321004, Zhejiang, China