HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency.

πŸ“… 2026-01-01
πŸ›οΈ Neural Networks
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limitations of generative adversarial networks (GANs) in producing images with sufficient diversity and quality across varying data scales. To this end, the authors propose FakeTwins, a self-supervised loss mechanism, combined with a cross-architecture discriminator consistency strategy. By leveraging a pretrained network as a source of self-supervised signals, the method jointly trains on multi-scale feature maps extracted from both CNN and Vision Transformer backbones, effectively integrating their complementary priors to enhance training stability and generalization. Evaluated on 17 diverse datasets spanning different image domains and data scales, the proposed approach consistently outperforms current state-of-the-art methods, achieving substantial improvements in FrΓ©chet Inception Distance (FID) and generating images with markedly enhanced quality and diversity.
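The summary does not give the paper's actual formulas, but the cross-architecture consistency idea can be sketched as penalizing disagreement between the two discriminator backbones' features at each scale. Everything below (function names, the mean-squared-error choice, the toy feature vectors) is an illustrative assumption, not the paper's implementation:

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def consistency_loss(cnn_feats, vit_feats):
    """Average per-scale MSE between CNN and ViT discriminator features.

    cnn_feats / vit_feats: lists of feature vectors, one per scale,
    assumed already projected to a common dimensionality.
    """
    assert len(cnn_feats) == len(vit_feats)
    per_scale = [mse(c, v) for c, v in zip(cnn_feats, vit_feats)]
    return sum(per_scale) / len(per_scale)

# Toy example: two scales, 3-dimensional features per scale.
cnn = [[0.0, 1.0, 2.0], [1.0, 1.0, 1.0]]
vit = [[0.0, 1.0, 2.0], [0.0, 1.0, 1.0]]
print(consistency_loss(cnn, vit))  # 1/6: zero at scale 1, 1/3 at scale 2, averaged
```

In practice such a term would be added to the adversarial loss with a weighting coefficient; the actual feature extraction, projection, and weighting in HP-GAN may differ.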

Problem

Research questions and friction points this paper is trying to address.

Generative Adversarial Networks
pretrained networks
image synthesis
self-supervised learning
discriminator consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

FakeTwins
discriminator consistency
pretrained networks
self-supervised learning
GAN improvement
Geonhui Son
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
Jeongryong Lee
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
Dosik Hwang
Professor, Yonsei University
Medical Imaging Β· Artificial Intelligence Β· Deep Learning