Generalized Dual Discriminator GANs

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address mode collapse in generative adversarial networks (GANs), this paper proposes the dual-discriminator α-GAN and establishes a generalized dual-discriminator GAN framework based on arbitrary convex functions defined on the positive reals. Theoretically, we prove that its optimization objective is equivalent to a weighted sum of an $f$-divergence and its reverse, thereby embedding the α-loss into the dual-discriminator architecture for the first time. This unifies and substantially extends the theoretical foundations of existing dual-discriminator approaches. Training proceeds via a min-max optimization with tunable loss functions, which promotes stability. Experiments on 2D synthetic data demonstrate that the proposed method significantly improves training stability and generation diversity, consistently outperforming baseline models across multiple quantitative metrics.
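For context, the original D2 GAN objective that this framework generalizes (introduced by Nguyen et al. in the D2 GAN paper; the display below is a hedged reconstruction from that literature, not a formula quoted from this paper) can be written as

$$\min_G \max_{D_1, D_2}\; \alpha\,\mathbb{E}_{x\sim P}[\log D_1(x)] - \mathbb{E}_{x\sim Q}[D_1(x)] - \mathbb{E}_{x\sim P}[D_2(x)] + \beta\,\mathbb{E}_{x\sim Q}[\log D_2(x)],$$

where $P$ is the data distribution and $Q$ the generator distribution. At the optimal discriminators this inner maximum reduces, up to additive constants, to $\alpha\,\mathrm{KL}(P\|Q) + \beta\,\mathrm{KL}(Q\|P)$; the generalized framework described above replaces the logarithms with arbitrary convex functions on the positive reals, yielding a weighted sum of an $f$-divergence and its reverse.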

📝 Abstract
Dual discriminator generative adversarial networks (D2 GANs) were introduced to mitigate the problem of mode collapse in generative adversarial networks. In D2 GANs, two discriminators are employed alongside a generator: one discriminator rewards high scores for samples from the true data distribution, while the other favors samples from the generator. In this work, we first introduce dual discriminator $α$-GANs (D2 $α$-GANs), which combine the strengths of dual discriminators with the flexibility of a tunable loss function, $α$-loss. We further generalize this approach to arbitrary functions defined on positive reals, leading to a broader class of models we refer to as generalized dual discriminator generative adversarial networks. For each of these proposed models, we provide theoretical analysis and show that the associated min-max optimization reduces to the minimization of a linear combination of an $f$-divergence and a reverse $f$-divergence. This generalizes the known simplification for D2 GANs, where the objective reduces to a linear combination of the KL-divergence and the reverse KL-divergence. Finally, we perform experiments on 2D synthetic data and use multiple performance metrics to capture various advantages of our GANs.
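The reduction to a linear combination of KL and reverse KL can be checked numerically for discrete distributions. The sketch below is illustrative only (the function names and the discrete setting are assumptions, not code from the paper): it evaluates the D2 GAN inner objective at the closed-form optimal discriminators $D_1^*(x) = \alpha\,p(x)/q(x)$ and $D_2^*(x) = \beta\,q(x)/p(x)$ and compares the result against $\alpha\,\mathrm{KL}(P\|Q) + \beta\,\mathrm{KL}(Q\|P)$ plus the additive constants.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

def d2gan_value(p, q, alpha=1.0, beta=1.0):
    """D2 GAN inner maximization, evaluated at the closed-form optimal
    discriminators D1*(x) = alpha*p(x)/q(x) and D2*(x) = beta*q(x)/p(x).
    (Hypothetical helper for illustration; not from the paper.)"""
    d1 = alpha * p / q
    d2 = beta * q / p
    return float(np.sum(alpha * p * np.log(d1) - q * d1
                        - p * d2 + beta * q * np.log(d2)))

# Toy data distribution P and generator distribution Q on three atoms.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
a, b = 1.0, 1.0

val = d2gan_value(p, q, a, b)
# Expected: alpha*KL(P||Q) + beta*KL(Q||P), up to the additive constants
# alpha*log(alpha) - alpha + beta*log(beta) - beta.
target = a * kl(p, q) + b * kl(q, p) + a * np.log(a) - a + b * np.log(b) - b
```

With `a = b = 1` the constants reduce to `-2`, recovering the classical D2 GAN simplification: objective = KL(P‖Q) + KL(Q‖P) − 2.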
Problem

Research questions and friction points this paper is trying to address.

Mitigating mode collapse in GANs using dual discriminators
Combining dual discriminators with tunable α-loss function
Generalizing dual discriminator approach to arbitrary f-divergences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual discriminator GANs prevent mode collapse
Tunable α-loss enhances flexibility in D2 α-GANs
Generalized models minimize f-divergence combinations
Penukonda Naga Chandana
International Institute of Information Technology, Hyderabad
Tejas Srivastava
International Institute of Information Technology, Hyderabad
Gowtham R. Kurri
International Institute of Information Technology, Hyderabad, India
Statistical Machine Learning, Information Theory
V. Lalitha
International Institute of Information Technology, Hyderabad