A Non-Adversarial Approach to Idempotent Generative Modelling

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Idempotent Generative Networks (IGNs) suffer from mode collapse, mode dropping, and training instability due to their reliance on adversarial objective components, resulting in incomplete coverage of the data manifold and degraded inpainting and generation quality. To address these limitations, we propose Non-Adversarial Idempotent Generative Networks (NAIGNs), which replace the adversarial component with an Implicit Maximum Likelihood Estimation (IMLE) objective combined with a reconstruction loss. Crucially, NAIGNs jointly model the data manifold's distance field and an energy-based distribution via an idempotent mapping. This enables implicit learning of the manifold's geometric structure, yielding stable and comprehensive manifold coverage. Experiments demonstrate that NAIGNs significantly outperform conventional IGNs on image inpainting and generation: generated samples exhibit higher fidelity to the real data distribution, achieve more complete mode coverage, and show markedly improved training robustness.

📝 Abstract
Idempotent Generative Networks (IGNs) are deep generative models that also function as local data manifold projectors, mapping arbitrary inputs back onto the manifold. They are trained to act as identity operators on the data and as idempotent operators off the data manifold. However, IGNs suffer from mode collapse, mode dropping, and training instability due to their objectives, which contain adversarial components and can cause the model to cover the data manifold only partially -- an issue shared with generative adversarial networks. We introduce Non-Adversarial Idempotent Generative Networks (NAIGNs) to address these issues. Our loss function combines reconstruction with the non-adversarial generative objective of Implicit Maximum Likelihood Estimation (IMLE). This improves on IGN's ability to restore corrupted data and generate new samples that closely match the data distribution. We moreover demonstrate that NAIGNs implicitly learn the distance field to the data manifold, as well as an energy-based model.
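The combined objective described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the generator `f` here is a stand-in linear map, and the exact weighting and matching details are assumptions; the sketch only shows the structure of a reconstruction term (identity on real data) plus an IMLE term (each data point is pulled toward its nearest generated sample, which avoids mode dropping since every data point gets matched).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, W):
    # Toy "network": a linear map standing in for the idempotent generator.
    return x @ W

def naign_style_loss(X, W, n_samples=8):
    """Illustrative reconstruction + IMLE objective (not the paper's exact loss).

    X: (n, d) batch of data points; W: (d, d) toy parameters.
    """
    # Reconstruction term: f should act as the identity on real data.
    recon = np.mean(np.sum((f(X, W) - X) ** 2, axis=1))

    # IMLE term: draw several latent samples, push each through f, and for
    # every data point penalise only the distance to its *nearest* sample.
    Z = rng.standard_normal((n_samples, X.shape[1]))
    G = f(Z, W)                                                # generated samples
    d2 = np.sum((X[:, None, :] - G[None, :, :]) ** 2, axis=2)  # (n, n_samples)
    imle = np.mean(np.min(d2, axis=1))                         # nearest-sample match

    return recon + imle

# Example: with W = identity the reconstruction term vanishes and only
# the IMLE matching term remains.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
print(naign_style_loss(X, np.eye(2)))
```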
Problem

Research questions and friction points this paper is trying to address.

Address mode collapse and training instability in generative models
Improve data manifold coverage and sample generation quality
Enable implicit learning of distance fields and energy models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-adversarial training using IMLE objective
Combining reconstruction with generative modeling
Implicitly learning manifold distance and energy model
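The last point above has a simple geometric reading, sketched below under the assumption (not spelled out in the summary) that the displacement norm of an idempotent projector is used as the read-out: for a map `f` that projects inputs onto the data manifold, `||f(x) - x||` estimates the distance to the manifold, and the squared displacement can serve as an unnormalised energy. The `proj` map here is a hypothetical, exactly idempotent toy projector.

```python
import numpy as np

def displacement_energy(f, x):
    """For an idempotent projector f, read the squared displacement
    ||f(x) - x||^2 as an (unnormalised) energy; its square root is an
    estimate of the distance from x to the data manifold."""
    fx = f(x)
    return float(np.sum((fx - x) ** 2))

# Toy projector onto the x-axis (an exactly idempotent map: proj(proj(v)) == proj(v)).
proj = lambda v: v * np.array([1.0, 0.0])

# On-manifold point: zero energy. Off-manifold point: positive energy,
# equal to the squared distance to the manifold.
print(displacement_energy(proj, np.array([2.0, 0.0])))  # 0.0
print(displacement_energy(proj, np.array([2.0, 3.0])))  # 9.0
```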