Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models

📅 2024-01-09
🏛️ arXiv.org
📈 Citations: 14
Influential: 2
🤖 AI Summary
Diffusion models suffer from high latency due to numerous denoising steps and complex network architectures, hindering low-latency deployment. Existing post-training quantization (PTQ) methods struggle with severe dynamic shifts in latent-space activations, causing distribution mismatch both during calibration and output reconstruction—leading to sharp performance degradation at low bit-widths. To address this, we propose an enhanced distribution alignment PTQ framework that jointly optimizes density- and diversity-driven calibration sample selection in the latent space and hierarchical output reconstruction loss, achieving block-level accuracy preservation without fine-tuning. Extensive evaluation on DDIM, Latent Diffusion Models (LDM), and Stable Diffusion across CIFAR-10, LSUN, and ImageNet demonstrates significant improvements over state-of-the-art PTQ methods: FID improves by over 30% at 4-bit quantization, with consistent robustness across diverse diffusion architectures.

📝 Abstract
Diffusion models have achieved great success in image generation tasks through iterative noise estimation. However, the heavy denoising process and complex neural networks hinder their low-latency applications in real-world scenarios. Quantization can effectively reduce model complexity, and post-training quantization (PTQ), which does not require fine-tuning, is highly promising for compressing and accelerating diffusion models. Unfortunately, we find that due to the highly dynamic distribution of activations in different denoising steps, existing PTQ methods for diffusion models suffer from distribution mismatch issues at both calibration sample level and reconstruction output level, which makes the performance far from satisfactory, especially in low-bit cases. In this paper, we propose Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models (EDA-DM) to address the above issues. Specifically, at the calibration sample level, we select calibration samples based on the density and variety in the latent space, thus facilitating the alignment of their distribution with the overall samples; and at the reconstruction output level, we modify the loss of block reconstruction with the losses of layers, aligning the outputs of quantized model and full-precision model at different network granularity. Extensive experiments demonstrate that EDA-DM significantly outperforms the existing PTQ methods across various models (DDIM, LDM-4, LDM-8, Stable-Diffusion) and different datasets (CIFAR-10, LSUN-Bedroom, LSUN-Church, ImageNet, MS-COCO).
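The first component of the method selects calibration samples whose latent-space distribution matches that of the overall sample pool. The sketch below illustrates one plausible density-and-diversity selection scheme (cluster the latents for diversity, then sample each cluster in proportion to its size for density). This is a minimal numpy sketch, not the authors' implementation; the function name, the use of k-means, and the proportional-allocation rule are assumptions for illustration.

```python
import numpy as np

def select_calibration_samples(latents, num_select, num_clusters=4, seed=0):
    """Pick calibration samples whose latent-space distribution matches
    the full pool: cluster the latents (diversity), then draw from each
    cluster in proportion to its size (density). Illustrative only."""
    rng = np.random.default_rng(seed)
    n = len(latents)
    # Lightweight k-means as a stand-in for any clustering routine.
    centers = latents[rng.choice(n, num_clusters, replace=False)]
    for _ in range(10):
        dist = np.linalg.norm(latents[:, None] - centers[None], axis=-1)
        labels = dist.argmin(axis=1)
        for k in range(num_clusters):
            members = latents[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    # Density-proportional allocation: bigger clusters contribute more.
    chosen = []
    for k in range(num_clusters):
        idx = np.flatnonzero(labels == k)
        take = max(1, round(num_select * len(idx) / n))
        chosen.extend(rng.choice(idx, min(take, len(idx)), replace=False))
    return np.asarray(chosen[:num_select])
```

In practice the latents would come from the diffusion model at several denoising steps, so that the selected set also covers the temporal dynamics the abstract describes.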
Problem

Research questions and friction points this paper is trying to address.

Severe distribution mismatch when quantizing diffusion models, caused by highly dynamic activations across denoising steps
Calibration samples that fail to represent the overall latent-space distribution
Reconstruction losses that align quantized and full-precision outputs at only a single network granularity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selects calibration samples by density and diversity in the latent space
Augments the block-reconstruction loss with layer-wise losses for multi-granularity alignment
A general PTQ framework for diffusion models that requires no fine-tuning
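The second component modifies the block-reconstruction objective with layer-level terms, so the quantized model is aligned with the full-precision model at more than one granularity. The sketch below shows the general shape of such a loss; the function name, the `layer_weight` parameter, and the specific weighting (full weight on the block output, reduced weight on intermediate layers) are assumptions, not the paper's exact formulation.

```python
import numpy as np

def hierarchical_recon_loss(fp_layers, q_layers, x, layer_weight=0.5):
    """Block-level output MSE plus down-weighted per-layer MSEs.

    fp_layers / q_layers: lists of callables (full-precision and
    quantized layers) applied in sequence to the input activations.
    A sketch of multi-granularity alignment, not the paper's loss.
    """
    loss = 0.0
    h_fp, h_q = x, x
    last = len(fp_layers) - 1
    for i, (fp_l, q_l) in enumerate(zip(fp_layers, q_layers)):
        h_fp, h_q = fp_l(h_fp), q_l(h_q)
        # The block output (last layer) gets full weight; intermediate
        # layer outputs contribute a smaller, fixed share.
        w = 1.0 if i == last else layer_weight
        loss += w * float(np.mean((h_q - h_fp) ** 2))
    return loss
```

Minimizing this over the quantization parameters of one block at a time, using the selected calibration samples, would give the fine-tuning-free block-wise reconstruction the abstract describes.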