🤖 AI Summary
To address the training instability and mode collapse inherent in generative adversarial networks (GANs) for distribution matching, this paper proposes a novel method based on generalized consistency modeling. It is the first to introduce consistency modeling into distribution matching, replacing GANs' bilevel min-max optimization with a single-level norm-minimization objective. The approach builds on continuous normalizing flows to ensure optimization stability and support flexible constraint specification. Technically, it unifies gradient-matching losses, latent-variable modeling, and domain-translation mechanisms, and is accompanied by a theoretical convergence analysis. Experiments on synthetic and multiple real-world benchmarks demonstrate substantial improvements in both training stability and distribution-matching accuracy. The method consistently outperforms state-of-the-art baselines across diverse tasks, including image generation, density estimation, and cross-domain matching, while offering enhanced robustness and modeling flexibility.
📝 Abstract
Recent advances in generative models have demonstrated remarkable performance across various data modalities. Beyond their typical use in data synthesis, these models play a crucial role in distribution matching tasks such as latent variable modeling, domain translation, and domain adaptation. Generative Adversarial Networks (GANs) have emerged as the preferred method for distribution matching due to their efficacy in handling high-dimensional data and their flexibility in accommodating various constraints. However, GANs often encounter challenges in training due to their bi-level min-max optimization objective and susceptibility to mode collapse. In this work, we propose a novel approach to distribution matching inspired by the consistency models employed in continuous normalizing flows (CNFs). Our model inherits the advantages of CNF models, such as a straightforward norm-minimization objective, while remaining adaptable to different constraints, similar to GANs. We provide theoretical validation of our proposed objective and demonstrate its performance through experiments on synthetic and real-world datasets.