🤖 AI Summary
Diffusion Transformer models suffer from prohibitively high computational cost and multi-step sampling latency, hindering real-time deployment. This paper proposes a training-free cache-reuse acceleration method that models cross-step feature similarity from a global denoising trajectory perspective, enabling efficient reuse of redundant computations. Our approach introduces a dynamic noise-aware cache allocation and filtering mechanism. Key contributions include: (1) a systematic cache analysis framework grounded in sampling trajectories; (2) a training-agnostic, plug-and-play global cache scheduling strategy; and (3) adaptive cache filtering guided by dynamic noise estimation. Evaluated on image and video generation tasks, the method achieves 2.1–3.4× speedup in inference time while preserving fidelity metrics—including FID and CLIP score—demonstrating significant gains in both efficiency and practical deployability.
📝 Abstract
Diffusion models have emerged as a powerful paradigm for generative tasks such as image synthesis and video generation, with Transformer architectures further enhancing performance. However, the high computational cost of diffusion Transformers, stemming from a large number of sampling steps and complex per-step computations, presents significant challenges for real-time deployment. In this paper, we introduce OmniCache, a training-free acceleration method that exploits the global redundancy inherent in the denoising process. Unlike existing methods that determine caching strategies based on inter-step similarities and tend to concentrate reuse in later sampling steps, our approach starts from the sampling perspective of DiT models. We systematically analyze the model's sampling trajectories and strategically distribute cache reuse across the entire sampling process. This global perspective enables more effective utilization of cached computations throughout the diffusion trajectory, rather than concentrating reuse within limited segments of the sampling procedure. In addition, during cache reuse, we dynamically estimate the corresponding noise and filter it out to reduce its impact on the sampling direction. Extensive experiments demonstrate that our approach accelerates sampling while maintaining competitive generative quality, offering a practical solution for efficient deployment of diffusion-based generative models.
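To make the scheduling idea concrete, here is a minimal, hypothetical Python sketch, not the paper's implementation: cache-reuse steps are spread uniformly over the whole denoising trajectory rather than clustered near the end, and at those steps a cached prediction replaces the full DiT forward pass. The noise-estimation-and-filtering component is omitted for brevity, and all function and parameter names (`dit_forward`, `reuse_ratio`, the simplified Euler-style update) are illustrative assumptions.

```python
def make_global_schedule(num_steps, reuse_ratio):
    """Spread cache-reuse steps evenly over the whole trajectory,
    instead of clustering them in one segment (illustrative only)."""
    num_reuse = int(num_steps * reuse_ratio)
    if num_reuse == 0:
        return set()
    stride = num_steps / num_reuse
    # Never reuse at step 0: the cache must be populated first.
    return {round(i * stride) for i in range(1, num_reuse + 1)
            if round(i * stride) < num_steps}

def sample(dit_forward, x, num_steps, reuse_ratio=0.5):
    """Toy sampler: skip the Transformer forward pass on scheduled steps
    by reusing the most recent cached prediction."""
    reuse_steps = make_global_schedule(num_steps, reuse_ratio)
    cache = None
    for t in range(num_steps):
        if t in reuse_steps and cache is not None:
            eps = cache                # reuse cached prediction
        else:
            eps = dit_forward(x, t)    # full Transformer forward pass
            cache = eps
        x = x - eps / num_steps        # simplified Euler-style update
    return x
```

With `num_steps=10` and `reuse_ratio=0.5`, the schedule reuses the cache at evenly spaced steps {2, 4, 6, 8}, so only 6 of the 10 steps pay for a full forward pass, while the skipped computation is distributed across the trajectory instead of being concentrated at its tail.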