CacheQuant: Comprehensively Accelerated Diffusion Models

📅 2025-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models suffer from slow inference and structural redundancy, hindering low-latency deployment. Existing acceleration methods optimize either sampling steps (temporal dimension) or parameter count (structural dimension) in isolation, often causing performance collapse; naive joint optimization exacerbates error accumulation due to the non-orthogonality between these dimensions. This paper proposes a training-free cache-quantization co-acceleration paradigm. We introduce a dynamic programming–based joint scheduling strategy for optimal cache allocation and weight quantization, and design a decoupled error correction mechanism to suppress cross-step error propagation. Evaluated on MS-COCO, our method achieves 5.18× inference speedup and 4× model compression for Stable Diffusion, with only a marginal CLIP score degradation of 0.02—significantly surpassing the limitations of single-dimension optimization.

📝 Abstract
Diffusion models have gradually gained prominence in the field of image synthesis, showcasing remarkable generative capabilities. Nevertheless, the slow inference and complex networks, resulting from redundancy at both temporal and structural levels, hinder their low-latency applications in real-world scenarios. Current acceleration methods for diffusion models focus separately on temporal and structural levels. However, independent optimization at each level to further push the acceleration limits results in significant performance degradation. On the other hand, integrating optimizations at both levels can compound the acceleration effects. Unfortunately, we find that the optimizations at these two levels are not entirely orthogonal. Performing separate optimizations and then simply integrating them results in unsatisfactory performance. To tackle this issue, we propose CacheQuant, a novel training-free paradigm that comprehensively accelerates diffusion models by jointly optimizing model caching and quantization techniques. Specifically, we employ a dynamic programming approach to determine the optimal cache schedule, in which the properties of caching and quantization are carefully considered to minimize errors. Additionally, we propose decoupled error correction to further mitigate the coupled and accumulated errors step by step. Experimental results show that CacheQuant achieves a 5.18× speedup and 4× compression for Stable Diffusion on MS-COCO, with only a 0.02 loss in CLIP score. Our code is open-sourced: https://github.com/BienLuky/CacheQuant .
Problem

Research questions and friction points this paper is trying to address.

Accelerates diffusion models for low-latency applications
Optimizes caching and quantization jointly to reduce redundancy
Mitigates performance degradation from separate temporal and structural optimizations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly optimizes model caching and quantization
Uses dynamic programming for optimal cache schedule
Implements decoupled error correction technique
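To make the dynamic-programming idea concrete, here is a minimal sketch of how an optimal cache schedule could be selected: given a denoising trajectory, pick a fixed budget of steps at which deep features are recomputed, with every other step reusing the most recent cached features. The `step_error` table, the segment formulation, and all names here are illustrative assumptions, not the authors' actual implementation.

```python
def optimal_cache_schedule(step_error, num_steps, num_recompute):
    """Choose recompute steps that minimize total feature-reuse error.

    step_error[i][j]: assumed precomputed cost of computing features at
    step i and reusing them through step j (i <= j).
    dp[k][j]: min total error covering steps 0..j with k recomputations.
    Returns (sorted recompute steps, minimal total error).
    """
    INF = float("inf")
    dp = [[INF] * num_steps for _ in range(num_recompute + 1)]
    back = [[-1] * num_steps for _ in range(num_recompute + 1)]

    # Base case: a single recomputation at step 0 covers everything so far.
    for j in range(num_steps):
        dp[1][j] = step_error[0][j]

    # Transition: the k-th recomputation happens at step i, covering i..j.
    for k in range(2, num_recompute + 1):
        for j in range(k - 1, num_steps):
            for i in range(k - 1, j + 1):
                cost = dp[k - 1][i - 1] + step_error[i][j]
                if cost < dp[k][j]:
                    dp[k][j] = cost
                    back[k][j] = i

    # Backtrack to recover the steps where recomputation occurs.
    schedule = []
    j, k = num_steps - 1, num_recompute
    while k > 1:
        i = back[k][j]
        schedule.append(i)
        j, k = i - 1, k - 1
    schedule.append(0)
    return sorted(schedule), dp[num_recompute][num_steps - 1]
```

With a toy quadratic cost (reuse error grows with interval length), the schedule spaces recomputations to balance segment errors, which mirrors the intuition that caching intervals should be allocated where features change least.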
🔎 Similar Papers
No similar papers found.
Xuewen Liu
Institute of Automation, Chinese Academy of Sciences
Model compression
Zhikai Li
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Qingyi Gu
Institute of Automation, Chinese Academy of Sciences
High-speed vision; cell analysis