Sparse-to-Sparse Training of Diffusion Models

📅 2025-04-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Diffusion models (DMs) incur substantial computational costs during both training and inference, yet prior work has focused mostly on accelerating inference. Method: This paper introduces the sparse-to-sparse training paradigm to DMs for the first time, training sparse Latent Diffusion and ChiroDiff models from scratch to jointly improve training and inference efficiency. Three sparse training algorithms are proposed and evaluated (Static-DM, RigL-DM, and MagRan-DM); they keep the network sparse throughout training, either with a fixed connectivity mask or with prune-and-regrow updates, and the study identifies safe and effective sparsity configurations. Results: On six unconditional generation benchmarks, highly sparse models (over 80% of parameters pruned) converge stably and match or surpass dense baselines in generation quality, while substantially reducing parameter count and FLOPs. This shows that sparsification can improve efficiency and performance at the same time.
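For intuition, the prune-and-regrow step at the heart of dynamic sparse training can be sketched in a few lines. The snippet below is a minimal illustration assuming a PyTorch setting; the function name prune_and_regrow and the update_fraction parameter are placeholders rather than names from the paper, and it shows RigL-style regrowth by gradient magnitude (a MagRan-style variant would regrow random inactive positions instead).

```python
import torch

def prune_and_regrow(weight, grad, mask, update_fraction=0.3):
    """One RigL-style mask update for a single weight tensor (illustrative).

    Drops the smallest-magnitude active weights and regrows the same number of
    connections where the (dense) gradient magnitude is largest, so the overall
    sparsity level stays constant.
    """
    flat_mask = mask.flatten().clone()
    n_update = int(update_fraction * flat_mask.sum().item())
    if n_update == 0:
        return mask

    # Prune: among active weights, pick the n_update with the smallest magnitude.
    w = weight.abs().flatten()
    w[flat_mask == 0] = float("inf")        # inactive slots are never "pruned"
    drop_idx = torch.topk(w, n_update, largest=False).indices

    # Regrow: among inactive slots, pick the n_update with the largest gradient.
    g = grad.abs().flatten()
    g[flat_mask == 1] = float("-inf")       # active slots are never regrown
    grow_idx = torch.topk(g, n_update, largest=True).indices

    flat_mask[drop_idx] = 0
    flat_mask[grow_idx] = 1
    return flat_mask.view_as(mask)
```

Dynamic sparse training methods typically also decay the update fraction over the course of training and initialize the sparse topology with a scheme such as Erdős–Rényi kernel scaling, but the core mask update is the part sketched here.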

📝 Abstract
Diffusion models (DMs) are a powerful class of generative models that have achieved state-of-the-art results in various image synthesis tasks and have shown potential in other domains, such as natural language processing and temporal data modeling. Despite their stable training dynamics and ability to produce diverse, high-quality samples, DMs are notorious for requiring significant computational resources in both the training and inference stages. Previous work has focused mostly on increasing the efficiency of model inference. This paper introduces, for the first time, the paradigm of sparse-to-sparse training to DMs, with the aim of improving both training and inference efficiency. We focus on unconditional generation and train sparse DMs from scratch (Latent Diffusion and ChiroDiff) on six datasets using three different methods (Static-DM, RigL-DM, and MagRan-DM) to study the effect of sparsity on model performance. Our experiments show that sparse DMs are able to match and often outperform their dense counterparts, while substantially reducing the number of trainable parameters and FLOPs. We also identify safe and effective values for performing sparse-to-sparse training of DMs.
Problem

Research questions and friction points this paper is trying to address.

Improving the training and inference efficiency of diffusion models via sparse-to-sparse training
Reducing computational resources in training and inference stages
Matching or outperforming dense models with fewer parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse-to-sparse training for diffusion models: the network starts sparse and stays sparse throughout training (see the sketch after this list)
Reduces trainable parameters and FLOPs
Matches or outperforms dense model performance
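For concreteness, here is a minimal sketch of the static variant of this idea, assuming a PyTorch model; the helper names make_static_masks and apply_masks and the uniform random masks are illustrative choices, not the paper's implementation (the dynamic variants would additionally update the masks during training).

```python
import torch

def make_static_masks(model, sparsity=0.8):
    """Sample one fixed 0/1 mask per weight matrix; `sparsity` is the pruned fraction."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                  # skip biases and norm parameters
            masks[name] = (torch.rand_like(p) > sparsity).float()
            p.data.mul_(masks[name])                     # start training from a sparse network
    return masks

def apply_masks(model, masks):
    """Call after each optimizer.step() to keep pruned weights at exactly zero."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
```

Calling apply_masks(model, masks) after every optimizer step keeps the pruned positions at zero for the whole run, which is what makes the reduction in trainable parameters and FLOPs hold at both training and inference time.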
Ines Cardoso Oliveira
University of Luxembourg, Luxembourg
D. Mocanu
University of Luxembourg, Luxembourg
Luis A. Leiva
University of Luxembourg
Human-Computer Interaction · Machine Learning · Computational Interaction · Bio-signal processing