🤖 AI Summary
Diffusion models (DMs) incur substantial computational costs in both training and inference, yet prior work has focused predominantly on accelerating inference. Method: This paper introduces the sparse-to-sparse training paradigm to DMs for the first time, training sparse Latent Diffusion and ChiroDiff models from scratch to jointly improve training and inference efficiency. Three sparse training methods (Static-DM, RigL-DM, and MagRan-DM) are proposed and evaluated, combining fixed sparsity masks with dynamic pruning-and-regrowth mechanisms, in order to identify safe and effective sparsity configurations. Results: Across six unconditional generation benchmarks, highly sparse models (over 80% of parameters pruned) converge stably and match, and often surpass, their dense baselines in generation quality while substantially reducing parameter counts and FLOPs. This demonstrates that sparsification can improve efficiency and performance simultaneously.
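The abstract does not spell out the exact update rules behind the three methods, but the names suggest standard dynamic-sparse-training mechanics: a RigL-style step prunes the smallest-magnitude active weights and regrows the same number of inactive connections with the largest gradient magnitude, so overall sparsity stays constant. A minimal NumPy sketch of one such mask update (an illustration of the general technique, not the paper's implementation) might look like:

```python
import numpy as np

def rigl_style_mask_update(weights, grads, mask, update_frac=0.1):
    """One RigL-style connectivity update for a single weight tensor.

    Prunes the `update_frac` fraction of active weights with the smallest
    magnitude, then regrows the same number of previously inactive
    connections with the largest gradient magnitude, so the total number
    of active weights (and hence the sparsity level) is unchanged.
    """
    mask = mask.astype(bool).copy()
    flat_mask = mask.ravel()
    n_active = int(flat_mask.sum())
    k = int(update_frac * n_active)
    if k == 0:
        return mask

    active_idx = np.flatnonzero(flat_mask)
    inactive_idx = np.flatnonzero(~flat_mask)  # candidates for regrowth

    # Prune: drop the k active weights with the smallest magnitude.
    drop = active_idx[np.argsort(np.abs(weights.ravel()[active_idx]))[:k]]
    flat_mask[drop] = False

    # Regrow: activate the k (originally inactive) weights whose
    # gradients have the largest magnitude.
    grow = inactive_idx[np.argsort(-np.abs(grads.ravel()[inactive_idx]))[:k]]
    flat_mask[grow] = True
    return mask
```

A magnitude-prune/random-regrow variant (as the name MagRan-DM hints) would replace the gradient-based `grow` selection with a random draw from `inactive_idx`, while a static method would simply keep the initial mask fixed throughout training.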
📝 Abstract
Diffusion models (DMs) are a powerful class of generative models that have achieved state-of-the-art results in various image synthesis tasks and have shown potential in other domains, such as natural language processing and temporal data modeling. Despite their stable training dynamics and ability to produce diverse, high-quality samples, DMs are notorious for requiring significant computational resources in both the training and inference stages. Previous work has focused mostly on increasing the efficiency of model inference. This paper introduces, for the first time, the paradigm of sparse-to-sparse training to DMs, with the aim of improving both training and inference efficiency. We focus on unconditional generation and train sparse DMs from scratch (Latent Diffusion and ChiroDiff) on six datasets using three different methods (Static-DM, RigL-DM, and MagRan-DM) to study the effect of sparsity on model performance. Our experiments show that sparse DMs are able to match, and often outperform, their dense counterparts, while substantially reducing the number of trainable parameters and FLOPs. We also identify safe and effective sparsity values for sparse-to-sparse training of DMs.