🤖 AI Summary
Diffusion models achieve high generation quality but suffer from substantial inference latency, hindering real-time multimodal applications. To address their computational redundancy, this paper surveys caching-based efficient inference: a training-free, architecture-agnostic paradigm that enables inter-step information reuse via feature-level stride-based recycling and inter-layer dynamic scheduling. The paper introduces the first unified taxonomy for diffusion caching, systematically formalizing its theoretical foundations and evolutionary trajectory from static reuse to dynamic prediction, and discusses integration with complementary techniques including sampling optimization and model distillation. Across diverse tasks such as image generation and text-to-image synthesis, surveyed methods report an average 3.2× speedup with significantly reduced computational overhead while preserving generation fidelity. The resulting framework offers a general, high-efficiency foundation for real-time generative systems.
📝 Abstract
Diffusion Models have become a cornerstone of modern generative AI owing to their exceptional generation quality and controllability. However, their inherent *multi-step iterations* and *complex backbone networks* lead to prohibitive computational overhead and generation latency, forming a major bottleneck for real-time applications. Although existing acceleration techniques have made progress, they still face challenges such as limited applicability, high training costs, or quality degradation. Against this backdrop, **Diffusion Caching** offers a promising training-free, architecture-agnostic, and efficient inference paradigm. Its core mechanism identifies and reuses intrinsic computational redundancies in the diffusion process. By enabling feature-level cross-step reuse and inter-layer scheduling, it reduces computation without modifying model parameters. This paper systematically reviews the theoretical foundations and evolution of Diffusion Caching and proposes a unified framework for its classification and analysis. Through comparative analysis of representative methods, we show that Diffusion Caching evolves from *static reuse* to *dynamic prediction*. This trend enhances caching flexibility across diverse tasks and enables integration with other acceleration techniques such as sampling optimization and model distillation, paving the way for a unified, efficient inference framework for future multimodal and interactive applications. We argue that this paradigm will become a key enabler of real-time and efficient generative AI, injecting new vitality into both the theory and practice of *Efficient Generative Intelligence*.
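The cross-step reuse idea described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the backbone is reduced to hypothetical `shallow_blocks`/`deep_blocks` stand-ins, and a simple stride schedule (the *static reuse* end of the spectrum) decides when the expensive deep features are recomputed versus served from cache.

```python
# Minimal sketch of stride-based diffusion caching (hypothetical model API).
# Deep-block features computed at "full" steps are reused at the intermediate
# steps in between, so only the cheap shallow blocks run every step.

def shallow_blocks(x, t):
    # Stand-in for the inexpensive early layers of the backbone.
    return x + t

def deep_blocks(h, t):
    # Stand-in for the expensive later layers, whose outputs change slowly
    # across adjacent timesteps and are therefore good caching targets.
    return h * 2 + t

def cached_denoise(x, num_steps=8, stride=2):
    cache = None
    deep_calls = 0  # counts full (uncached) deep-block evaluations
    for t in range(num_steps, 0, -1):
        h = shallow_blocks(x, t)        # always recomputed
        if cache is None or t % stride == 0:
            cache = deep_blocks(h, t)   # full compute at scheduled steps
            deep_calls += 1
        x = cache                       # otherwise reuse cached deep features
    return x, deep_calls

# With stride=2, half of the deep-block evaluations are skipped.
out, calls = cached_denoise(0.0)
```

Dynamic-prediction variants replace the fixed `t % stride == 0` schedule with a learned or feature-similarity-based criterion, deciding per step (or per layer) whether the cached features are still accurate enough to reuse.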