Accelerating Diffusion Transformers with Dual Feature Caching

📅 2024-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow inference and high computational overhead of Diffusion Transformers (DiTs) in image and video generation, this paper proposes DuCa, a dual feature caching mechanism. The method introduces a dynamic scheduling framework that alternates between aggressive and conservative caching strategies, and develops a training-free, token-level V-caching technique compatible with FlashAttention. Guided by an analysis of how caching errors propagate across timesteps, the framework makes fine-grained, dynamic token-level caching decisions. Evaluated on standard DiT-based generative models, the approach achieves state-of-the-art (SOTA) visual quality while significantly outperforming existing caching methods in speedup, reaching up to a 2.1× higher acceleration ratio under comparable quality constraints. Crucially, it requires no additional training, fine-tuning, or calibration data. The implementation is fully open-sourced.
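A minimal sketch, in PyTorch, of what such an alternating schedule can look like during sampling. Everything here is an illustrative assumption rather than the paper's implementation: the ToyBlock (a real DiT block contains attention), the 3-step cycle, the keep_ratio, and the residual-norm importance score used to pick which tokens the conservative step recomputes.

```python
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for one DiT block (a real block would use attention + MLP)."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.mlp(x)

@torch.no_grad()
def sample_with_dual_caching(blocks, x, num_steps, cycle=3, keep_ratio=0.25):
    """Alternate a full step with aggressive and conservative caching steps.
    cache[l] stores the residual that block l added at the last full step."""
    cache = [None] * len(blocks)
    dim = x.shape[-1]
    for step in range(num_steps):
        phase = step % cycle
        for l, block in enumerate(blocks):
            if phase == 0 or cache[l] is None:
                # Full step: compute every token and refresh the cache.
                out = block(x)
                cache[l] = out - x
                x = out
            elif phase == 1:
                # Aggressive step: reuse the entire cached residual and
                # skip this block's computation altogether.
                x = x + cache[l]
            else:
                # Conservative step: recompute only the highest-scoring
                # tokens (residual norm as a stand-in importance score)
                # and reuse the cache for everything else.
                scores = cache[l].norm(dim=-1)                 # (B, N)
                k = max(1, int(keep_ratio * scores.shape[1]))
                idx = scores.topk(k, dim=1).indices            # (B, k)
                gather_idx = idx.unsqueeze(-1).expand(-1, -1, dim)
                out = x + cache[l]                             # cached path
                fresh = block(torch.gather(x, 1, gather_idx))  # recompute k tokens
                x = out.scatter(1, gather_idx, fresh)
    return x

# Toy usage: 4 blocks, a batch of 2 samples with 16 tokens, 6 denoising steps.
blocks = nn.ModuleList([ToyBlock(64) for _ in range(4)])
x = torch.randn(2, 16, 64)
out = sample_with_dual_caching(blocks, x, num_steps=6)
```

Caching the residual (out - x) rather than the raw block output lets the cached contribution be reapplied even after the input has drifted since the last full step.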

📝 Abstract
Diffusion Transformers (DiT) have become the dominant method in image and video generation, yet they still incur substantial computational costs. As an effective approach to DiT acceleration, feature caching methods cache the features computed in previous timesteps and reuse them in subsequent timesteps, allowing the corresponding computation to be skipped. However, on the one hand, aggressively reusing all cached features leads to a severe drop in generation quality. On the other hand, conservatively caching only the features of redundant layers or tokens while still computing the important ones preserves generation quality but reduces the acceleration ratio. Observing this tradeoff between generation quality and acceleration, this paper begins by quantitatively studying the error accumulated from cached features. Surprisingly, we find that aggressive caching does not introduce significantly more error at the caching step, and that conservative feature caching can fix the error introduced by aggressive caching. We therefore propose a dual caching strategy that applies aggressive and conservative caching alternately, achieving significant acceleration and high generation quality at the same time. In addition, we introduce a V-caching strategy for token-wise conservative caching, which is compatible with FlashAttention and requires no training or calibration data. Our code has been released on GitHub: https://github.com/Shenyi-Z/DuCa
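The abstract notes that token-wise conservative caching must stay compatible with FlashAttention, which never materializes the attention map usually used to score token importance. The sketch below illustrates one way to read the V-caching idea: rank tokens by the norm of their cached value (V) projections instead. The function name and the value-norm criterion are assumptions inferred from the abstract, not a confirmed description of the paper's method.

```python
import torch

def select_tokens_by_value_norm(v_cache, keep_ratio=0.25):
    """Pick tokens to recompute using only cached value (V) projections.

    v_cache: (batch, heads, tokens, head_dim) values from the last full
    step. Because no attention map is needed, the criterion works with
    FlashAttention, which never exposes the (tokens x tokens) map.
    Returns: (batch, k) indices of tokens to recompute."""
    scores = v_cache.norm(dim=-1).mean(dim=1)       # (batch, tokens)
    k = max(1, int(keep_ratio * scores.shape[-1]))
    return scores.topk(k, dim=-1).indices

# Toy usage: 2 samples, 8 heads, 256 tokens, head_dim 64.
v = torch.randn(2, 8, 256, 64)
recompute_idx = select_tokens_by_value_norm(v, keep_ratio=0.1)  # shape (2, 25)
```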
Problem

Research questions and friction points this paper is trying to address.

Image Generation
Diffusion Transformers
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual Feature Caching (DuCa)
Diffusion Transformer (DiT) Acceleration
V-Caching Strategy