🤖 AI Summary
Text-to-image diffusion models suffer from prohibitively high memory and computational requirements, hindering efficient deployment on edge devices; low-bit (<8-bit) quantization often degrades image fidelity and text–image alignment. To address this, we propose the first fine-tuning-free, distribution-aware grouped quantization framework. Our method identifies activation outliers and performs pixel- and channel-adaptive grouping, while explicitly modeling semantic sensitivity in cross-attention via prompt-specific logarithmic quantization scales. Evaluated on MS-COCO and PartiPrompts, our approach preserves high-fidelity generation and strong text–image alignment even at 2–4 bits, achieving substantial reductions in GPU memory footprint and computational cost. This work establishes a novel paradigm for efficient low-bit diffusion model deployment on resource-constrained hardware.
📝 Abstract
Despite the widespread use of text-to-image diffusion models across various tasks, their computational and memory demands limit practical applications. To mitigate this issue, quantization of diffusion models has been explored. It reduces memory usage and computational costs by compressing weights and activations into lower-bit formats. However, existing methods often struggle to preserve both image quality and text–image alignment, particularly in lower-bit (<8-bit) quantization. In this paper, we analyze the challenges associated with quantizing text-to-image diffusion models from a distributional perspective. Our analysis reveals that activation outliers play a crucial role in determining image quality. Additionally, we identify distinctive patterns in cross-attention scores, which significantly affect text–image alignment. To address these challenges, we propose Distribution-aware Group Quantization (DGQ), a method that identifies and adaptively handles pixel-wise and channel-wise outliers to preserve image quality. Furthermore, DGQ applies prompt-specific logarithmic quantization scales to maintain text–image alignment. Our method demonstrates strong performance on datasets such as MS-COCO and PartiPrompts. We are the first to successfully achieve low-bit quantization of text-to-image diffusion models without requiring additional fine-tuning of weight quantization parameters.
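To make the two core ideas concrete, the sketch below illustrates (a) uniform group quantization that keeps outlier channels in full precision, and (b) logarithmic quantization of post-softmax attention scores. This is a minimal, hypothetical NumPy illustration of the general techniques the abstract names; the grouping rule, outlier criterion, and scale selection here are our own simplifying assumptions, not the paper's exact algorithm.

```python
import numpy as np

def group_quantize(x, n_bits=4, outlier_pct=0.5):
    """Uniform fake-quantization with channel-wise outlier separation.

    Channels whose max magnitude exceeds the (100 - outlier_pct)-th
    percentile are kept in full precision; remaining channels each get
    a uniform per-channel scale. Illustrative only; the actual DGQ
    grouping is pixel- and channel-adaptive.
    """
    qmax = 2 ** (n_bits - 1) - 1
    ch_absmax = np.abs(x).max(axis=0)                  # per-channel range
    thresh = np.percentile(ch_absmax, 100 - outlier_pct)
    outliers = ch_absmax > thresh                      # full-precision channels
    scale = np.where(ch_absmax > 0, ch_absmax / qmax, 1.0)
    q = np.round(x / scale).clip(-qmax, qmax) * scale  # quantize-dequantize
    q[:, outliers] = x[:, outliers]                    # restore outlier channels
    return q, outliers

def log_quantize_attn(scores, n_bits=4):
    """Logarithmic quantization for post-softmax scores in (0, 1].

    Snaps each score to the nearest power of two 2^(-k), which matches
    the heavy skew of attention distributions better than a uniform grid.
    """
    levels = 2 ** n_bits - 1
    k = np.clip(np.round(-np.log2(np.maximum(scores, 2.0 ** -levels))),
                0, levels)
    return 2.0 ** -k
```

For example, a channel inflated by a 50x factor is detected as an outlier and passes through unquantized, while attention scores of 1.0 and 0.5 map exactly onto the logarithmic grid.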