CoVAE: Consistency Training of Variational Autoencoders

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative approaches typically rely on two-stage training—first pretraining a VAE, then training a generative model in the latent space—resulting in high computational cost and slow sampling. This work proposes CoVAE, a single-stage generative autoencoder framework that integrates consistency modeling into the VAE architecture for end-to-end joint optimization. Its key innovations include: (i) a time-dependent β-scheduling mechanism that scales the KL divergence, enabling progressive latent-space refinement; (ii) an encoder that explicitly models latent representations at multiple noise levels to emulate the diffusion forward process; and (iii) a decoder trained jointly via a consistency loss with variational regularization. CoVAE generates high-fidelity samples in one or a few steps, significantly outperforming conventional VAEs and existing single-stage methods in both sample quality and inference speed, while unifying autoencoding and diffusion-style generative modeling in a single training stage.
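The time-dependent β-scheduling idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the exact schedule shape and endpoints (`beta_min`, `beta_max` below) are hypothetical; the paper only specifies that a time-dependent β scales the KL loss, with the objective reducing to a conventional VAE loss at the earliest latent time.

```python
import numpy as np

# Hypothetical linear beta schedule (an assumption; the paper's exact schedule
# may differ): beta grows from beta_min at t=0 to beta_max at t=1, so larger
# encoding times impose stronger KL regularization, i.e. noisier latents.
def beta_schedule(t, beta_min=1e-4, beta_max=1.0):
    t = np.asarray(t, dtype=np.float64)
    return beta_min + (beta_max - beta_min) * t

def kl_standard_normal(mu, log_var):
    # KL(q(z|x) || N(0, I)) per sample, for a diagonal Gaussian encoder.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def time_dependent_elbo_loss(recon_err, mu, log_var, t):
    # beta(t) scales the KL term; at t near 0 this is close to a plain VAE loss.
    return recon_err + beta_schedule(t) * kl_standard_normal(mu, log_var)
```

At `t = 0` the KL weight is nearly zero, matching the claim that the earliest latent time recovers (near-)standard VAE training, while larger `t` pushes the latent toward the prior.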

📝 Abstract
Current state-of-the-art generative approaches frequently rely on a two-stage training procedure, where an autoencoder (often a VAE) first performs dimensionality reduction, followed by training a generative model on the learned latent space. While effective, this introduces computational overhead and increased sampling times. We challenge this paradigm by proposing Consistency Training of Variational AutoEncoders (CoVAE), a novel single-stage generative autoencoding framework that adopts techniques from consistency models to train a VAE architecture. The CoVAE encoder learns a progressive series of latent representations with increasing encoding noise levels, mirroring the forward processes of diffusion and flow matching models. This sequence of representations is regulated by a time-dependent $\beta$ parameter that scales the KL loss. The decoder is trained using a consistency loss with variational regularization, which reduces to a conventional VAE loss at the earliest latent time. We show that CoVAE can generate high-quality samples in one or a few steps without the use of a learned prior, significantly outperforming equivalent VAEs and other single-stage VAE methods. Our approach provides a unified framework for autoencoding and diffusion-style generative modeling, and a viable route to high-performance one-step generative autoencoding. Our code is publicly available at https://github.com/gisilvs/covae.
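The decoder-side consistency loss described in the abstract can be sketched as follows. This is a toy illustration under stated assumptions: the linear `encode`/`decode` maps, the noise scale tied directly to the encoding time, and the squared-error objective are all hypothetical stand-ins for the paper's networks and exact loss; consistency models typically treat the smaller-time target as fixed (stop-gradient or an EMA teacher), which is noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder/decoder weights (hypothetical stand-ins for the networks).
W_enc = rng.normal(size=(4, 2))
W_dec = rng.normal(size=(2, 4))

def encode(x, t):
    # Encoder output whose noise level grows with the encoding time t,
    # loosely mirroring a diffusion forward process (an assumption).
    mu = x @ W_enc
    return mu + t * rng.normal(size=mu.shape)

def decode(z):
    return z @ W_dec

def covae_consistency_loss(x, t, s):
    # The decoder output at the larger time t is pulled toward a target
    # decoded from a less-noisy latent at the smaller time s. In practice
    # the target branch is detached (stop-gradient / EMA teacher).
    assert 0.0 <= s < t
    target = decode(encode(x, s))  # treated as fixed during training
    pred = decode(encode(x, t))
    return float(np.mean((pred - target) ** 2))
```

With `s` at the earliest latent time, the target branch is essentially a standard VAE reconstruction, which is consistent with the abstract's remark that the objective reduces to a conventional VAE loss there.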
Problem

Research questions and friction points this paper is trying to address.

Two-stage training (pretrained VAE plus latent generative model) in current generative pipelines
Computational overhead and slow sampling of two-stage approaches
Disconnect between autoencoding and diffusion-style generative modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-stage VAE with consistency training
Progressive latent representations with noise
Consistency loss with variational regularization