Training Latent Diffusion Models with Interacting Particle Algorithms

📅 2025-05-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of end-to-end training in latent diffusion models (LDMs). Methodologically, it formulates the training objective as a gradient flow of a free energy functional in Wasserstein space and introduces, for the first time, an interacting particle system to yield a differentiable, unbiased approximation of this flow. Theoretically, it establishes an explicit error bound for the particle approximation, overcoming convergence and stability limitations inherent in conventional variational inference and existing particle-based methods. Experimentally, the proposed framework consistently outperforms state-of-the-art particle-based algorithms and variational training schemes across multiple benchmarks—yielding more stable training dynamics and higher-fidelity generated images.

Technology Category

Application Category

📝 Abstract
We introduce a novel particle-based algorithm for end-to-end training of latent diffusion models. We reformulate the training task as minimizing a free energy functional and obtain a gradient flow that does so. By approximating the latter with a system of interacting particles, we obtain the algorithm, which we underpin it theoretically by providing error guarantees. The novel algorithm compares favorably in experiments with previous particle-based methods and variational inference analogues.
Problem

Research questions and friction points this paper is trying to address.

Develop particle-based algorithm for latent diffusion training
Reformulate training as free energy minimization
Provide theoretical error guarantees for algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Particle-based algorithm for latent diffusion training
Minimizing free energy via gradient flow
Theoretical error guarantees for particle approximation
🔎 Similar Papers
No similar papers found.