IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses unsupervised disentangled representation learning by proposing IB-GAN: a generative adversarial network that incorporates a learnable stochastic intermediate latent layer in its generator and embeds the information bottleneck (IB) principle directly into the adversarial objective. This enables end-to-end mutual information regularization between latent variables and generated outputs—without requiring explicit supervision—thereby promoting semantic disentanglement and interpretability of the latent space. On dSprites and Color-dSprites, IB-GAN achieves disentanglement metrics (e.g., DCI, MIG) comparable to β-VAE and significantly surpasses InfoGAN. On CelebA and 3D Chairs, it attains lower FID scores, indicating superior generation fidelity and diversity. The core contribution is the first seamless integration of the information bottleneck mechanism into the GAN framework, jointly optimizing for disentanglement, generation quality, and training stability.
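The summary's core mechanism, a KL-based information bottleneck on the generator's stochastic intermediate layer combined with the adversarial loss, can be sketched numerically. This is a minimal illustration, not the paper's implementation; the function names, the standard-normal prior, and the `lam`/`beta` weights are assumptions for the sketch.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ): a variational upper bound
    # on I(c; z) through the stochastic intermediate layer, assuming a
    # standard-normal prior (illustrative choice).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def ib_gan_generator_loss(adv_loss, mi_lower, kl_upper, lam=1.0, beta=0.1):
    # Sketch of a generator objective: adversarial term, minus a lower bound
    # on the code-output mutual information (to keep codes informative),
    # plus beta times the KL upper bound that throttles information flow.
    return adv_loss - lam * mi_lower + beta * np.mean(kl_upper)
```

With `mu = logvar = 0` the KL term vanishes, so the bottleneck penalty only activates when the learned latent distribution departs from the prior.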

📝 Abstract
We propose a new GAN-based unsupervised model for disentangled representation learning. The new model is derived by applying the Information Bottleneck (IB) framework to the optimization of GANs, and is thereby named IB-GAN. The architecture of IB-GAN is partially similar to that of InfoGAN but has a critical difference: an intermediate layer of the generator is leveraged to constrain the mutual information between the input and the generated output. The intermediate stochastic layer can serve as a learnable latent distribution that is trained jointly with the generator in an end-to-end fashion. As a result, the generator of IB-GAN can harness the latent space in a disentangled and interpretable manner. In experiments on the dSprites and Color-dSprites datasets, we demonstrate that IB-GAN achieves disentanglement scores competitive with those of state-of-the-art β-VAEs and outperforms InfoGAN. Moreover, the visual quality and diversity of samples generated by IB-GAN are often better than those of β-VAEs and InfoGAN in terms of FID score on the CelebA and 3D Chairs datasets.
Problem

Research questions and friction points this paper is trying to address.

Proposes unsupervised GAN model for disentangled representation learning
Uses information bottleneck to optimize generator's latent space constraints
Achieves competitive disentanglement and better sample quality than benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

IB-GAN integrates Information Bottleneck into GAN optimization
It constrains mutual information via generator intermediate layer
Learns disentangled latent space in end-to-end training
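The mutual-information term these bullets refer to is typically estimated with a variational lower bound, where an auxiliary reconstructor tries to recover the code from the generated output. The sketch below assumes a Gaussian reconstructor; the helper names and the fixed variance are hypothetical, not from the paper.

```python
import numpy as np

def gaussian_log_prob(c, c_hat, log_sigma=0.0):
    # log N(c; c_hat, sigma^2 I): log-likelihood of the true code c under
    # the reconstructor's prediction c_hat (hypothetical helper).
    var = np.exp(2.0 * log_sigma)
    return -0.5 * np.sum(
        (c - c_hat) ** 2 / var + 2.0 * log_sigma + np.log(2.0 * np.pi),
        axis=-1,
    )

def mi_lower_bound(c, c_hat, entropy_c=0.0):
    # Variational lower bound I(c; G(z)) >= E[log q(c | G(z))] + H(c);
    # maximizing it encourages generated samples to retain code information.
    return np.mean(gaussian_log_prob(c, c_hat)) + entropy_c
```

Maximizing this bound plays the InfoGAN-style role in the objective, while the KL bottleneck on the intermediate layer caps how much information the codes may carry.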
Insu Jeon
Dept. of Computer Science and Engineering, Seoul National University, Republic of Korea
Wonkwang Lee
PhD Student @ Seoul National University
Computer Vision · Machine Learning · Deep Learning
Myeongjang Pyeon
Dept. of Computer Science and Engineering, Seoul National University, Republic of Korea
Gunhee Kim
Professor, Seoul National University
Computer Vision · Machine Learning · Natural Language Processing