AI Summary
Diffusion models suffer from high computational overhead, hindering deployment on edge devices; meanwhile, existing ultra-low-bit (2–4 bit) quantization methods are vulnerable to outliers, suffer from suboptimal initialization, and fail to preserve temporal consistency in sequential generation. To address these challenges, we propose a flexible mixed-precision quantization framework comprising: (1) flexible Z-order residual quantization to mitigate outlier sensitivity; (2) object-aware low-rank initialization for enhanced training stability; and (3) memory-augmented temporal relation distillation to improve long-sequence generation consistency. Leveraging binary residual branches, LoRA-driven module-wise analysis, and an online pixel queue mechanism, our method significantly outperforms state-of-the-art approaches across diverse diffusion architectures and generation tasks. It maintains high-fidelity synthesis and efficient inference at 2–4 bits, achieving a synergistic optimization of compression ratio and stability.
Abstract
Diffusion models have demonstrated remarkable performance on vision generation tasks. However, their high computational complexity hinders wide deployment on edge devices. Quantization has emerged as a promising technique for inference acceleration and memory reduction. However, existing quantization methods do not generalize well under extremely low-bit (2–4 bit) quantization; directly applying them causes severe performance degradation. We identify that existing quantization frameworks suffer from an outlier-unfriendly quantizer design, suboptimal initialization, and a suboptimal optimization strategy. We present MPQ-DMv2, an improved Mixed Precision Quantization framework for extremely low-bit Diffusion Models. From the quantization perspective, the imbalanced distribution caused by salient outliers is quantization-unfriendly for a uniform quantizer. We propose Flexible Z-Order Residual Mixed Quantization, which utilizes an efficient binary residual branch with flexible quantization steps to handle salient errors. For the optimization framework, we theoretically analyze the convergence and optimality of the LoRA module and propose Object-Oriented Low-Rank Initialization, which uses prior quantization error for informative initialization. We then propose Memory-based Temporal Relation Distillation, which constructs an online time-aware pixel queue for long-term denoising temporal information distillation, ensuring overall temporal consistency between the quantized and full-precision models. Comprehensive experiments on various generation tasks show that MPQ-DMv2 surpasses current SOTA methods by a large margin across different architectures, especially under extremely low bit-widths.
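To make the residual-branch idea concrete, here is a minimal NumPy sketch of a uniform quantizer augmented with a 1-bit residual correction. This is an illustrative assumption about the mechanism, not the paper's exact MPQ-DMv2 formulation: the function names, the per-tensor sign/scale coding of the residual, and the min-max calibration are all choices made for this sketch. The point it demonstrates is that when salient outliers force a coarse uniform step, a cheap binary branch can absorb part of the resulting quantization error.

```python
import numpy as np

def uniform_quantize(x, n_bits):
    """Plain min-max uniform quantizer with 2**n_bits levels."""
    qmax = 2 ** n_bits - 1
    scale = (x.max() - x.min()) / qmax  # outliers stretch this step size
    zero = x.min()
    q = np.clip(np.round((x - zero) / scale), 0, qmax)
    return q * scale + zero  # dequantized values

def residual_binary_quantize(x, n_bits):
    """Uniform quantization plus a 1-bit residual branch (sketch).

    The binary branch stores one sign bit per element and a single shared
    scale (the mean absolute residual), so large outlier-induced errors are
    partially corrected at roughly one extra bit per weight.
    """
    base = uniform_quantize(x, n_bits)
    residual = x - base
    alpha = np.abs(residual).mean()          # shared scale of the binary branch
    correction = alpha * np.sign(residual)   # 1-bit code per element
    return base + correction
```

Analytically, the correction lowers the mean squared error from E[r^2] to E[r^2] - alpha^2 (with alpha = E|r|), so the residual branch always helps whenever the residual is nonzero; the gain is largest exactly when outliers have inflated the uniform step.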