🤖 AI Summary
To address parameter redundancy in large-scale vision generative models—such as diffusion and flow models—during downstream deployment, this paper proposes EntPruner, an entropy-guided progressive pruning framework. Methodologically, it introduces *conditional entropy deviation* as a novel block-level importance metric and integrates a zero-shot adaptive pruning strategy that dynamically determines the optimal pruning timing and granularity without fine-tuning, thereby preserving distribution fidelity under compression. EntPruner automatically simplifies model structures across mainstream architectures—including DiT and SiT—while maintaining generative capability. Evaluated on ImageNet and three downstream datasets, it achieves up to 2.22× inference speedup with state-of-the-art generation quality as measured by metrics such as FID and LPIPS, demonstrating significant improvements in deployment efficiency for vision generative models.
📝 Abstract
Large-scale vision generative models, including diffusion and flow models, have demonstrated remarkable performance in visual generation tasks. However, transferring these pre-trained models to downstream tasks often results in significant parameter redundancy. In this paper, we propose EntPruner, an entropy-guided automatic progressive pruning framework for diffusion and flow models. First, we introduce entropy-guided pruning, a block-level importance assessment strategy specifically designed for generative models. Unlike discriminative models, generative models must preserve the diversity and condition-fidelity of the output distribution. As the importance of each module can vary significantly across downstream tasks, EntPruner prioritizes pruning of less important blocks using data-dependent Conditional Entropy Deviation (CED) as a guiding metric. CED quantifies how much the output distribution diverges from the learned conditional data distribution after a block is removed. Second, we propose a zero-shot adaptive pruning framework that automatically determines when and how much to prune during training. This dynamic strategy avoids the pitfalls of one-shot pruning, mitigating mode collapse and preserving model performance. Extensive experiments on DiT and SiT models demonstrate the effectiveness of EntPruner, achieving up to 2.22$\times$ inference speedup while maintaining competitive generation quality on ImageNet and three downstream datasets.
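The abstract does not include pseudocode, but the CED idea—score each block by how much its removal shifts the model's conditional output distribution—can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration: the toy residual stack, the softmax-entropy proxy for the output distribution, and all function names are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ToyBlockModel(nn.Module):
    """Toy stand-in for a DiT/SiT-style stack of residual blocks (hypothetical)."""
    def __init__(self, dim=16, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_blocks))

    def forward(self, x, skip=None):
        for i, blk in enumerate(self.blocks):
            if i == skip:
                continue  # ablate this block: residual path becomes identity
            x = x + torch.tanh(blk(x))
        return x

def entropy(p, eps=1e-8):
    # Shannon entropy of a softmax distribution, averaged over the batch
    return -(p * (p + eps).log()).sum(dim=-1).mean()

@torch.no_grad()
def conditional_entropy_deviation(model, x):
    """CED-style block score (assumed reading of the metric):
    the absolute change in output-distribution entropy when one block
    is removed, relative to the full model."""
    h_full = entropy(model(x).softmax(dim=-1))
    scores = []
    for i in range(len(model.blocks)):
        h_ablated = entropy(model(x, skip=i).softmax(dim=-1))
        scores.append((h_ablated - h_full).abs().item())
    return scores  # smaller deviation => better pruning candidate

torch.manual_seed(0)
model = ToyBlockModel()
scores = conditional_entropy_deviation(model, torch.randn(32, 16))
```

A progressive pruner in the spirit of the paper would then repeatedly remove the lowest-scoring block and re-evaluate, rather than pruning all low-scoring blocks in one shot.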