LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models

📅 2024-04-17
🏛️ arXiv.org
📈 Citations: 12
Influential: 0
🤖 AI Summary
In the AIGC era, existing compression methods for Stable Diffusion models (SDMs) rely on manual layer removal and imbalanced feature distillation, resulting in low efficiency, poor generalization, and training instability. Method: we propose (1) a one-shot layer pruning criterion whose performance is guaranteed by its additivity property, enabling efficient and scalable U-Net compression; and (2) normalized feature distillation, a technique that mitigates the numerical imbalance among multi-objective feature loss terms. Results: at a 50% pruning ratio, the method incurs only a 4.0% decline in PickScore on SDXL and SDM-v1.5, while the best comparative method declines by at least 8.2%. The compressed models maintain high generation fidelity with a substantial parameter reduction, enabling deployment under low-resource and on-device constraints.

📝 Abstract
In the era of AIGC, demand has emerged for low-budget and even on-device applications of diffusion models. Several approaches have been proposed for compressing Stable Diffusion models (SDMs); most rely on handcrafted layer removal to obtain smaller U-Nets, together with knowledge distillation to recover network performance. However, handcrafted layer removal is inefficient and lacks scalability and generalization, and the feature distillation employed in the retraining phase suffers from an imbalance issue: a few numerically large feature loss terms dominate the others throughout retraining. To this end, we propose LAPTOP-Diff, layer pruning and normalized distillation for compressing diffusion models. We 1) introduce a layer pruning method that compresses the SDM's U-Net automatically, with an effective one-shot pruning criterion whose one-shot performance is guaranteed by its good additivity property, surpassing other layer pruning and handcrafted layer removal methods, and 2) propose normalized feature distillation for retraining, which alleviates the imbalance issue. Using LAPTOP-Diff, we compressed the U-Nets of SDXL and SDM-v1.5 to state-of-the-art performance, achieving a minimal 4.0% decline in PickScore at a 50% pruning ratio, whereas the smallest PickScore decline among comparative methods is 8.2%.
Problem

Research questions and friction points this paper is trying to address.

Compress diffusion models for low-budget applications
Improve layer pruning efficiency and scalability
Address feature distillation imbalance in retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated layer pruning for U-Net compression
Normalized feature distillation for balanced retraining
One-shot pruning criterion with additivity property (see the sketch below)
👥 Authors
Dingkun Zhang (OPPO AI Center, Shenzhen, China)
Sijia Li (Institute of Information Engineering, Chinese Academy of Sciences)
Chen Chen (OPPO AI Center, Shenzhen, China)
Qingsong Xie (OPPO AI Center, Shenzhen, China)
Haonan Lu (OPPO AI Center, Shenzhen, China)