🤖 AI Summary
Diffusion models achieve high generation quality but incur substantial computational overhead; low-bit post-training quantization often suffers significant performance degradation because it neglects weight and activation outliers. This paper proposes DMQ, a joint weight-and-activation quantization framework tailored for diffusion models. The method introduces two core components: (1) Learned Equivalent Scaling (LES) and channel-wise Power-of-Two Scaling (PTS) to handle outlier-heavy distributions; and (2) an adaptive timestep weighting scheme that prioritizes the critical early denoising steps during calibration, together with a voting algorithm that makes PTS factor selection reliable even with a small calibration set. Evaluated under W4A6 and W4A8 configurations, the approach achieves state-of-the-art results on FID, LPIPS, and other metrics, delivering both superior generation quality and stability. The implementation is publicly available.
📝 Abstract
Diffusion models have achieved remarkable success in image generation but come with significant computational costs, posing challenges for deployment in resource-constrained environments. Recent post-training quantization (PTQ) methods have attempted to mitigate this issue by exploiting the iterative nature of diffusion models. However, these approaches often overlook outliers, leading to degraded performance at low bit-widths. In this paper, we propose DMQ, which combines Learned Equivalent Scaling (LES) and channel-wise Power-of-Two Scaling (PTS) to effectively address these challenges. Learned Equivalent Scaling optimizes channel-wise scaling factors to redistribute quantization difficulty between weights and activations, reducing overall quantization error. Recognizing that early denoising steps, despite having small quantization errors, crucially impact the final output due to error accumulation, we incorporate an adaptive timestep weighting scheme to prioritize these critical steps during learning. Furthermore, observing that layers such as skip connections exhibit high inter-channel variance, we introduce channel-wise Power-of-Two Scaling for activations. To ensure robust selection of PTS factors even with a small calibration set, we introduce a voting algorithm that enhances reliability. Extensive experiments demonstrate that our method significantly outperforms existing methods, especially at low bit-widths such as W4A6 (4-bit weight, 6-bit activation) and W4A8, maintaining high image generation quality and model stability. The code is available at https://github.com/LeeDongYeun/dmq.
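The core idea behind equivalent scaling can be illustrated with a toy example: folding a per-channel factor `s` out of the activations and into the weights leaves the layer output unchanged but moves quantization difficulty away from outlier activation channels. The sketch below is a minimal illustration, not the paper's method — DMQ *learns* `s` (and selects power-of-two factors by voting), whereas here `s` is a simple fixed heuristic chosen only to show the effect.

```python
import numpy as np

def quantize(x, n_bits):
    """Symmetric uniform quantization with a single per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))          # activations for a toy linear layer
X[:, 3] *= 50.0                       # one outlier channel dominates the range
W = rng.normal(size=(16, 8))          # weights (out_features, in_features)

# Equivalent scaling: (X / s) @ (W * s).T == X @ W.T exactly,
# so we are free to pick s to balance quantization difficulty.
# Hypothetical heuristic (not the paper's learned factors):
s = np.sqrt(np.abs(X).max(axis=0))
X_s, W_s = X / s, W * s

# Power-of-two variant: round s to the nearest power of two so the
# rescaling reduces to a cheap exponent shift at inference time.
s_pot = 2.0 ** np.round(np.log2(s))
X_p, W_p = X / s_pot, W * s_pot

# Compare W4A6-style quantization error against the full-precision output.
ref = X @ W.T
err_plain = np.abs(quantize(X, 6) @ quantize(W, 4).T - ref).mean()
err_les = np.abs(quantize(X_s, 6) @ quantize(W_s, 4).T - ref).mean()
err_pts = np.abs(quantize(X_p, 6) @ quantize(W_p, 4).T - ref).mean()
print(f"plain: {err_plain:.3f}  scaled: {err_les:.3f}  power-of-two: {err_pts:.3f}")
```

Because the outlier channel no longer dictates the per-tensor activation scale, both scaled variants yield a noticeably smaller output error than direct quantization in this toy setting.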