🤖 AI Summary
Latent Diffusion Models (LDMs) suffer from slow inference due to their iterative denoising procedure. Method: We propose LatentCRF, a lightweight, plug-and-play neural layer that embeds a continuous Conditional Random Field (CRF) directly into the latent diffusion process. Unlike prior approaches, LatentCRF explicitly models the spatial and semantic dependencies among latent variables, replacing a subset of denoising iterations without any architectural modification to the base LDM. Contribution/Results: This work presents the first end-to-end joint modeling of continuous CRFs and latent diffusion, achieving a 33% inference speedup while preserving image fidelity (FID, LPIPS) and generation diversity (L1-entropy). By improving efficiency without sacrificing reconstruction accuracy or generalization, LatentCRF substantially improves the practical deployability of LDMs, with no loss of generative quality and no changes to the base model.
📝 Abstract
Latent Diffusion Models (LDMs) produce high-quality, photo-realistic images; however, the latency incurred by multiple costly inference iterations can restrict their applicability. We introduce LatentCRF, a continuous Conditional Random Field (CRF) model, implemented as a neural network layer, that captures the spatial and semantic relationships among the latent vectors in the LDM. By replacing some of the computationally intensive LDM inference iterations with our lightweight LatentCRF, we achieve a superior balance between quality, speed, and diversity. We increase inference efficiency by 33% with no loss in image quality or diversity compared to the full LDM. LatentCRF is an easy add-on that does not require modifying the LDM.
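To make the inference schedule concrete, here is a minimal sketch of the idea the abstract describes: run most of the expensive denoising iterations as usual, then substitute the remaining ones with a single lightweight refinement pass. All function names and the step counts are illustrative assumptions, not the paper's actual implementation; the stand-in updates are placeholders for the real U-Net and CRF computations.

```python
def expensive_denoise_step(z, t):
    """Stand-in for one costly denoising iteration of the base LDM (placeholder math)."""
    return z * 0.9


def latentcrf_refine(z):
    """Stand-in for the lightweight LatentCRF layer that models spatial and
    semantic dependencies among latent vectors (placeholder math)."""
    return z * 0.8


def sample(z0, total_steps=30, replaced_steps=10):
    """Run the first (total_steps - replaced_steps) LDM iterations, then replace
    the final `replaced_steps` iterations with one LatentCRF pass."""
    z = z0
    kept = total_steps - replaced_steps
    for t in range(kept):
        z = expensive_denoise_step(z, t)
    z = latentcrf_refine(z)
    return z, kept


z, kept = sample(1.0, total_steps=30, replaced_steps=10)
print(kept)  # 20 expensive steps instead of 30: roughly the 33% reduction reported
```

The key design point is that the base LDM's step function is untouched; the substitution happens purely in the sampling loop, which is why LatentCRF can be added without modifying the LDM itself.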