TR-DQ: Time-Rotation Diffusion Quantization

📅 2025-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion model quantization methods neglect the dynamic distribution shifts across sampling time-steps and struggle with significant activations that cannot be eliminated, leading to substantial performance degradation. To address these issues, the paper proposes Time-Rotation Diffusion Quantization (TR-DQ), a novel quantization framework for diffusion models. TR-DQ divides the sampling process by time-steps and applies an orthogonal rotation matrix to smooth weights and activations dynamically; it employs time-step-specific hyperparameters to enable dynamic quantization across time-steps, and additionally explores Classifier-Free Guidance (CFG)-wise compression as a foundation for future work. Evaluated on image and video generation benchmarks, TR-DQ achieves state-of-the-art (SOTA) fidelity and perceptual quality, with a 1.38-1.89x inference speedup and a 1.97-2.58x memory reduction compared to existing quantization methods, without compromising generation quality.

📝 Abstract
Diffusion models have been widely adopted in image and video generation. However, their complex network architectures lead to high inference overhead in the generation process. Existing diffusion quantization methods primarily focus on quantizing the model structure while ignoring the impact of time-step variation during sampling. At the same time, most current approaches fail to account for significant activations that cannot be eliminated, resulting in substantial performance degradation after quantization. To address these issues, we propose Time-Rotation Diffusion Quantization (TR-DQ), a novel quantization method incorporating time-step and rotation-based optimization. TR-DQ first divides the sampling process by time-steps and applies a rotation matrix to smooth activations and weights dynamically. For different time-steps, a dedicated hyperparameter is introduced for adaptive timing modeling, enabling dynamic quantization across time-steps. Additionally, we explore the compression potential of Classifier-Free Guidance (CFG-wise compression) to establish a foundation for subsequent work. TR-DQ achieves state-of-the-art (SOTA) performance on image and video generation tasks, with a 1.38-1.89x inference speedup and a 1.97-2.58x memory reduction compared to existing quantization methods.
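The rotation-based smoothing idea can be sketched in a few lines: inserting an orthogonal matrix R into a linear layer leaves the full-precision output unchanged (since (xR)(Rᵀw) = xw) while spreading outlier activation channels, so both factors quantize with less error. The abstract does not specify the paper's rotation construction or quantizer, so the sketch below uses a random orthogonal matrix and symmetric round-to-nearest quantization purely as illustrative stand-ins.

```python
import numpy as np

def random_rotation(d, seed=0):
    # Illustrative stand-in: orthogonal matrix via QR decomposition
    # of a random Gaussian matrix (sign-corrected columns).
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

def quantize(x, bits=8):
    # Symmetric per-tensor round-to-nearest quantization (assumed quantizer).
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def rotated_linear(x, w, R, bits=8):
    # Because R is orthogonal, (x R)(R.T w) == x w in full precision;
    # rotating first smooths outlier channels before quantization.
    xq = quantize(x @ R, bits)
    wq = quantize(R.T @ w, bits)
    return xq @ wq
```

In practice the rotation can be folded into the stored weights offline, so only the activation-side rotation adds runtime cost.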
Problem

Research questions and friction points this paper is trying to address.

Reduces high inference overhead in diffusion models.
Addresses time-step variation impact during sampling.
Minimizes performance degradation from non-eliminable activations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-step division with rotation matrix optimization
Dynamic quantization across different time steps
Classifier-Free Guidance compression for performance enhancement
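The time-step division above can be illustrated with a minimal sketch: partition the sampling schedule into contiguous groups of time-steps and calibrate a separate quantization scale for each group. The percentile-based clipping scale below is an assumed choice for illustration; the paper's actual per-time-step hyperparameter is not specified here.

```python
import numpy as np

def timestep_groups(num_steps, num_groups):
    # Partition the sampling schedule into contiguous time-step groups,
    # each of which gets its own quantization hyperparameters.
    edges = np.linspace(0, num_steps, num_groups + 1).astype(int)
    return [range(edges[i], edges[i + 1]) for i in range(num_groups)]

def calibrate_scales(acts_per_step, groups, bits=8, pct=99.9):
    # One clipping scale per group, taken from a percentile of the
    # calibration activations observed at that group's time-steps
    # (percentile clipping is an assumed, illustrative criterion).
    qmax = 2 ** (bits - 1) - 1
    scales = []
    for g in groups:
        vals = np.concatenate([np.abs(acts_per_step[t]).ravel() for t in g])
        scales.append(np.percentile(vals, pct) / qmax)
    return scales
```

At inference, each time-step simply looks up its group's scale, so the dynamic behavior costs no extra calibration passes at runtime.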