A Survey on Cache Methods in Diffusion Models: Toward Efficient Multi-Modal Generation

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models achieve high generation quality but suffer from substantial inference latency, hindering real-time multimodal applications. To address this computational redundancy, the paper surveys caching-based efficient inference: a training-free, architecture-agnostic paradigm that reuses information across denoising steps via feature-level stride-based recycling and inter-layer dynamic scheduling. It introduces a unified taxonomy for diffusion caching, systematically formalizing its theoretical foundations and its evolutionary trajectory from static reuse to dynamic prediction, and discusses integration with complementary techniques such as sampling optimization and model distillation. Across diverse tasks, including image generation and text-to-image synthesis, the reviewed methods report an average 3.2× speedup with significantly reduced computational overhead while preserving generation fidelity, positioning caching as a general, high-efficiency solution for real-time generative systems.
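The stride-based recycling described above can be illustrated with a toy denoising loop: cheap shallow layers run at every step, while the expensive deep-layer features are recomputed only every few steps and reused in between. This is a minimal sketch under assumed names (`sample_with_cache`, `shallow_fn`, `deep_fn`), not the implementation of any specific method from the survey.

```python
def sample_with_cache(x, num_steps, stride, shallow_fn, deep_fn):
    """Toy denoising loop with stride-based feature caching.

    Recomputes the expensive deep-layer features only every `stride`
    steps and reuses the cached result in between; shallow layers
    always run. All functions and the update rule are illustrative.
    """
    cached_deep = None
    deep_calls = 0  # count how often the expensive path actually runs
    for t in range(num_steps):
        h = shallow_fn(x, t)  # cheap shallow layers: computed every step
        if cached_deep is None or t % stride == 0:
            cached_deep = deep_fn(h, t)  # expensive deep layers: refreshed on stride
            deep_calls += 1
        x = h + cached_deep  # toy combination of shallow and (cached) deep features
    return x, deep_calls
```

With `num_steps=10` and `stride=2`, the deep path runs only 5 times instead of 10, which is the source of the latency savings; larger strides trade fidelity for speed.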

📝 Abstract
Diffusion Models have become a cornerstone of modern generative AI for their exceptional generation quality and controllability. However, their inherent *multi-step iterations* and *complex backbone networks* lead to prohibitive computational overhead and generation latency, forming a major bottleneck for real-time applications. Although existing acceleration techniques have made progress, they still face challenges such as limited applicability, high training costs, or quality degradation. Against this backdrop, **Diffusion Caching** offers a promising training-free, architecture-agnostic, and efficient inference paradigm. Its core mechanism identifies and reuses intrinsic computational redundancies in the diffusion process. By enabling feature-level cross-step reuse and inter-layer scheduling, it reduces computation without modifying model parameters. This paper systematically reviews the theoretical foundations and evolution of Diffusion Caching and proposes a unified framework for its classification and analysis. Through comparative analysis of representative methods, we show that Diffusion Caching evolves from *static reuse* to *dynamic prediction*. This trend enhances caching flexibility across diverse tasks and enables integration with other acceleration techniques such as sampling optimization and model distillation, paving the way for a unified, efficient inference framework for future multimodal and interactive applications. We argue that this paradigm will become a key enabler of real-time and efficient generative AI, injecting new vitality into both theory and practice of *Efficient Generative Intelligence*.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in diffusion models
Addressing generation latency for real-time applications
Enabling efficient inference without quality degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free, architecture-agnostic efficient inference paradigm
Reuses intrinsic computational redundancies in the diffusion process
Enables feature-level cross-step reuse and inter-layer scheduling
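The shift from static reuse to dynamic prediction noted in the abstract can be sketched as a cache policy that decides per step whether the deep computation can be skipped, based on how much a cheap probe feature has changed since the cache was filled. The function name, the scalar probe, and the threshold `tau` are all assumptions for illustration, not the paper's actual criterion.

```python
def dynamic_cache_step(prev_probe, new_probe, cached_out, compute_fn, tau=0.05):
    """Dynamic caching decision for one denoising step.

    Reuses `cached_out` when the cheap probe feature has changed by less
    than the relative threshold `tau` (an assumed hyperparameter);
    otherwise recomputes via `compute_fn`. Returns (output, recomputed).
    """
    change = abs(new_probe - prev_probe) / (abs(prev_probe) + 1e-8)
    if cached_out is not None and change < tau:
        return cached_out, False  # cache hit: deep computation skipped
    return compute_fn(new_probe), True  # cache miss: refresh the cache
```

Unlike a fixed stride, this policy adapts to the trajectory: early steps, where features change quickly, trigger recomputation often, while later near-converged steps hit the cache almost every time.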
Authors

Jiacheng Liu · Shanghai Jiao Tong University
Xinyu Wang · Shanghai Jiao Tong University, Tsinghua University
Yuqi Lin · Zhejiang University (Computer Vision, Multimodal Foundation Model)
Zhikai Wang · Shanghai Jiao Tong University
Peiru Wang · Shanghai Jiao Tong University
Peiliang Cai · Shanghai Jiao Tong University
Qinming Zhou · Shanghai Jiao Tong University, Tsinghua University
Zhengan Yan · Shanghai Jiao Tong University
Zexuan Yan · Shanghai Jiao Tong University
Zhengyi Shi · Shanghai Jiao Tong University
Chang Zou · Intern at EPIC Lab, Shanghai Jiao Tong University (Generative models, Image and Video generation)
Yue Ma · Bytedance (NLP, Dialogue System, LLM)
Linfeng Zhang · DP Technology; AI for Science Institute (AI for Science, multi-scale modeling, molecular simulation, drug/materials design)