🤖 AI Summary
Diffusion models on high-dimensional data suffer from poor generation quality, low training efficiency, and, critically, failure to preserve the intrinsic geometric structure of the data distribution. Method: The paper proposes a geometry-preserving encoder-decoder framework to replace conventional VAEs, enabling efficient and stable diffusion modeling in the latent space. Contribution/Results: It introduces the first theoretical encoder-decoder framework with rigorous differential-geometric constraints (e.g., isometry or conformality), proves convergence of encoder training, shows that the geometry-preserving encoder accelerates decoder convergence, and designs a theory-guided encoder optimization strategy. Experiments show that the method significantly improves joint training stability and convergence speed, achieves superior generation quality across multiple benchmarks, and reduces training time.
📝 Abstract
Generative modeling aims to produce new data samples that resemble a given dataset, and diffusion models have recently become the most popular class of generative models. A main challenge is that diffusion models operate directly in the input space, which tends to be very high-dimensional. To make training more efficient, recent work instead solves the diffusion process in a latent space, using an encoder that maps the data space to a lower-dimensional latent space, and has achieved state-of-the-art results. The variational autoencoder (VAE) is the most commonly used encoder/decoder framework in this setting, known for its ability to learn latent representations and generate data samples. In this paper, we introduce a novel encoder/decoder framework with theoretical properties distinct from those of the VAE, designed specifically to preserve the geometric structure of the data distribution. We demonstrate the significant advantages of this geometry-preserving encoder in training both the encoder and the decoder. We also provide theoretical results on the convergence of the training process, including convergence guarantees for encoder training, and show that decoder training converges faster when using the geometry-preserving encoder.
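The abstract does not spell out how the isometry or conformality constraints are enforced, but a common way to express them is as a penalty on the encoder's Jacobian. As a minimal illustration (not the paper's actual method), consider a linear encoder z = W x, whose Jacobian is W itself: isometry asks that W Wᵀ equal the identity (distances preserved), while conformality only asks that W Wᵀ be a scalar multiple of the identity (angles preserved, distances uniformly rescaled).

```python
import numpy as np

def isometry_penalty(W):
    """Frobenius penalty encouraging the linear encoder z = W @ x to be an
    isometry: the Gram matrix W @ W.T should equal the identity, so
    Euclidean distances in data space are preserved in latent space."""
    k = W.shape[0]
    G = W @ W.T  # Gram matrix of the encoder's Jacobian (here just W)
    return np.sum((G - np.eye(k)) ** 2)

def conformality_penalty(W):
    """Weaker constraint: W @ W.T should be a *scalar multiple* of the
    identity, i.e. the map preserves angles but may rescale distances."""
    k = W.shape[0]
    G = W @ W.T
    scale = np.trace(G) / k  # best-fitting scalar multiple of the identity
    return np.sum((G - scale * np.eye(k)) ** 2)

# An encoder with orthonormal rows incurs (near-)zero isometry penalty.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # orthonormal columns
W_iso = Q.T  # 3x8 encoder matrix with orthonormal rows

print(isometry_penalty(W_iso))            # ~0: distances preserved
print(isometry_penalty(2.0 * W_iso))      # nonzero: scaling breaks isometry
print(conformality_penalty(2.0 * W_iso))  # ~0: uniform scaling is conformal
```

For a nonlinear encoder, the same penalties would be applied to the Jacobian evaluated at sampled data points; the names `isometry_penalty` and `conformality_penalty` here are illustrative, not from the paper.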