🤖 AI Summary
Diffusion models suffer from slow inference and high energy consumption. To address this, we propose a synergistic optimization framework that integrates aggressive quantization with dynamic temporal sparsity. Our approach introduces, for the first time, a time-step-aware sparsity detection mechanism, coupled with channel-wise adaptive sparsity modeling and a heterogeneous mixed-precision dense-sparse architecture. Specifically, we employ 4-bit joint weight–activation quantization, channel-last address mapping, and time-step-aware sparsity decision policies. Evaluated on standard benchmarks, our method achieves comparable generation quality (FID ≈ 2.8) while delivering a 6.91× inference speedup and 51.5% energy reduction over conventional dense accelerators, significantly improving the hardware efficiency and practical deployability of diffusion models without compromising fidelity.
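To make the 4-bit joint weight–activation quantization concrete, here is a minimal NumPy sketch of symmetric 4-bit quantization with a max-abs scale. The function names and the per-tensor calibration are illustrative assumptions for exposition, not the paper's actual calibration procedure.

```python
import numpy as np

def quantize_4bit(x, axis=None):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7].

    The scale is derived from the maximum absolute value (per tensor,
    or per `axis` for channel-wise scales). This is a generic sketch,
    not the paper's exact scheme.
    """
    max_abs = np.max(np.abs(x), axis=axis, keepdims=axis is not None)
    scale = np.where(max_abs == 0, 1.0, max_abs / 7.0)  # guard all-zero input
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from 4-bit codes."""
    return q.astype(np.float32) * scale

# Toy example: quantize a random weight matrix and check the error.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_4bit(w)
mean_err = np.mean(np.abs(dequantize(q, s) - w))
```

Quantizing activations with the same routine at runtime (rather than offline, as for weights) is what "joint weight–activation" quantization refers to; a real accelerator would fuse the scaling into the datapath.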
📝 Abstract
Diffusion models have gained significant popularity in image generation tasks. However, generating high-quality content remains notably slow because it requires running model inference over many time steps. To accelerate these models, we propose to aggressively quantize both weights and activations while simultaneously promoting significant activation sparsity. We further observe that this sparsity pattern varies across channels and evolves over time steps. To support this quantization and sparsity scheme, we present a novel diffusion model accelerator featuring a heterogeneous mixed-precision dense-sparse architecture, channel-last address mapping, and a time-step-aware sparsity detector for efficient handling of the sparsity pattern. Our 4-bit quantization technique demonstrates superior generation quality compared to existing 4-bit methods, and our custom accelerator achieves a 6.91x speed-up and 51.5% energy reduction compared to traditional dense accelerators.