SpeCa: Accelerating Diffusion Transformers with Speculative Feature Caching

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models suffer from strict sequential dependencies and high computational overhead, hindering real-time inference. This paper proposes SpeCa—the first framework to successfully adapt speculative decoding to diffusion Transformers—introducing a novel "forecast-then-verify" mechanism: speculative feature caching enables parallel denoising, while parameter-free lightweight verification and sample-adaptive computation scheduling achieve efficient inference control without introducing extra model parameters. Evaluated on FLUX, DiT, and HunyuanVideo, SpeCa achieves 6.34× speedup (5.5% quality drop), 7.3× speedup (zero fidelity loss), and 6.1× speedup (VBench score 79.84%), respectively, with verification overhead of only 1.67%–3.5%. These results demonstrate a significant breakthrough in overcoming the inference efficiency bottleneck of diffusion models.

📝 Abstract
Diffusion models have revolutionized high-fidelity image and video synthesis, yet their computational demands remain prohibitive for real-time applications. These models face two fundamental challenges: strict temporal dependencies preventing parallelization, and computationally intensive forward passes required at each denoising step. Drawing inspiration from speculative decoding in large language models, we present SpeCa, a novel 'Forecast-then-verify' acceleration framework that effectively addresses both limitations. SpeCa's core innovation lies in introducing Speculative Sampling to diffusion models, predicting intermediate features for subsequent timesteps based on fully computed reference timesteps. Our approach implements a parameter-free verification mechanism that efficiently evaluates prediction reliability, enabling real-time decisions to accept or reject each prediction while incurring negligible computational overhead. Furthermore, SpeCa introduces sample-adaptive computation allocation that dynamically modulates resources based on generation complexity, allocating reduced computation for simpler samples while preserving intensive processing for complex instances. Experiments demonstrate 6.34x acceleration on FLUX with minimal quality degradation (5.5% drop), 7.3x speedup on DiT while preserving generation fidelity, and 79.84% VBench score at 6.1x acceleration for HunyuanVideo. The verification mechanism incurs minimal overhead (1.67%-3.5% of full inference costs), establishing a new paradigm for efficient diffusion model inference while maintaining generation quality even at aggressive acceleration ratios. Our code has been released on GitHub: https://github.com/Shenyi-Z/Cache4Diffusion
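The forecast-then-verify loop described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the first-order feature extrapolation, the drift-based verification proxy, and the names `model` and `tau` are all assumptions made for the sketch.

```python
import numpy as np

def denoise_with_speca_sketch(x, model, num_steps=10, tau=0.05):
    """Hypothetical forecast-then-verify loop (illustrative only).

    `model(x, t)` stands in for the expensive per-timestep Transformer
    feature computation. We keep two fully computed reference features,
    linearly extrapolate the next one, and run a cheap parameter-free
    check; on rejection we fall back to a full forward pass.
    Returns the number of full model calls actually made.
    """
    f_prev = model(x, 0)   # fully computed reference features
    f_ref = model(x, 1)
    full_calls = 2
    for t in range(2, num_steps):
        # First-order speculative forecast from cached features.
        f_pred = f_ref + (f_ref - f_prev)
        # Parameter-free verification proxy: relative drift between
        # consecutive features (a stand-in for the paper's actual test).
        drift = np.linalg.norm(f_pred - f_ref) / (np.linalg.norm(f_ref) + 1e-8)
        if drift <= tau:
            f_new = f_pred            # accept the speculation
        else:
            f_new = model(x, t)       # reject: pay for a full pass
            full_calls += 1
        f_prev, f_ref = f_ref, f_new
    return full_calls
```

For a sample whose features evolve smoothly, nearly every forecast is accepted and most full passes are skipped; a sample with rapidly changing features fails verification and falls back to full computation, which is the sample-adaptive behavior the paper describes.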
Problem

Research questions and friction points this paper is trying to address.

Accelerating diffusion models for real-time image synthesis
Reducing computational costs of iterative denoising steps
Maintaining generation quality while achieving significant speedup
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Sampling for feature prediction
Parameter-free verification for reliability checks
Sample-adaptive computation allocation for efficiency
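The parameter-free verification named above can be illustrated with a simple norm-ratio reliability score. The specific metric and threshold below are assumptions for the sketch; the paper's verifier may use a different statistic, and in practice the reference it checks against would be some cheaply obtained proxy rather than the full feature.

```python
import numpy as np

def relative_error(f_pred, f_ref, eps=1e-8):
    # Parameter-free reliability score: a plain norm ratio,
    # with no learned weights or extra model parameters.
    return float(np.linalg.norm(f_pred - f_ref) / (np.linalg.norm(f_ref) + eps))

def accept_forecast(f_pred, f_ref, tau=0.05):
    # Accept the speculated feature only when its relative error
    # against the reference stays under the threshold tau.
    return relative_error(f_pred, f_ref) <= tau
```

Sample-adaptive computation allocation then amounts to choosing `tau` (or the spacing of reference timesteps) per sample, so easy samples skip more forward passes than hard ones.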