Forecast the Principal, Stabilize the Residual: Subspace-Aware Feature Caching for Efficient Diffusion Transformers

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of Diffusion Transformers (DiT) in image and video generation, where existing caching methods struggle to balance speed and generation quality. The authors propose SVD-Cache, a novel framework that reveals, for the first time, the distinct temporal dynamics of the principal and residual subspaces of DiT's feature space. Leveraging this insight, they design a subspace-aware caching mechanism: principal components are predicted via an exponential moving average, while residual components are directly reused. Evaluated on models such as FLUX and HunyuanVideo, SVD-Cache achieves up to 5.55× acceleration with negligible quality degradation. Moreover, the approach is fully compatible with mainstream acceleration techniques, including distillation, quantization, and sparse attention.

📝 Abstract
Diffusion Transformer (DiT) models have achieved unprecedented quality in image and video generation, yet their iterative sampling process remains computationally prohibitive. To accelerate inference, feature caching methods have emerged that reuse intermediate representations across timesteps. However, existing caching approaches treat all feature components uniformly. We reveal that DiT feature spaces contain distinct principal and residual subspaces with divergent temporal behavior: the principal subspace evolves smoothly and predictably, while the residual subspace exhibits volatile, low-energy oscillations that resist accurate prediction. Building on this insight, we propose SVD-Cache, a subspace-aware caching framework that decomposes diffusion features via Singular Value Decomposition (SVD), applies exponential moving average (EMA) prediction to the dominant low-rank components, and directly reuses the residual subspace. Extensive experiments demonstrate that SVD-Cache achieves near-lossless generation quality across diverse models and methods, including a 5.55× speedup on FLUX and HunyuanVideo, and is compatible with model acceleration techniques including distillation, quantization, and sparse attention. Our code is in the supplementary material and will be released on GitHub.
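The core mechanism described in the abstract, splitting a feature map into a low-rank principal part and a residual via SVD, EMA-smoothing the principal part, and freezing the residual between full forward passes, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the class name, the `rank` and `beta` parameters, and the full-step/cached-step interface are assumptions for exposition.

```python
import numpy as np

def svd_split(feat, rank):
    """Split a feature map (tokens x channels) into a dominant low-rank
    principal part and a low-energy residual via truncated SVD."""
    U, S, Vt = np.linalg.svd(feat, full_matrices=False)
    principal = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    return principal, feat - principal

class SubspaceCache:
    """Hypothetical sketch of subspace-aware caching: on full steps the
    principal part updates an EMA and the residual is cached; on cached
    (skipped) steps the EMA-smoothed principal is combined with the
    frozen residual instead of running the network."""

    def __init__(self, rank=8, beta=0.9):
        self.rank, self.beta = rank, beta
        self.ema_principal = None
        self.residual = None

    def full_step(self, feat):
        # Real forward pass: refresh both subspaces from the new features.
        principal, self.residual = svd_split(feat, self.rank)
        if self.ema_principal is None:
            self.ema_principal = principal
        else:
            self.ema_principal = (self.beta * self.ema_principal
                                  + (1 - self.beta) * principal)
        return feat

    def cached_step(self):
        # Skipped forward pass: predict the smooth principal component
        # from the EMA and directly reuse the volatile residual.
        return self.ema_principal + self.residual
```

The split is lossless by construction (`principal + residual == feat`), so the approximation error on cached steps comes only from how far the true principal trajectory has drifted from its EMA prediction, which the paper argues is small because the principal subspace evolves smoothly.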
Problem

Research questions and friction points this paper is trying to address.

Diffusion Transformer
feature caching
inference acceleration
subspace decomposition
iterative sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Transformer
feature caching
subspace decomposition
Singular Value Decomposition
inference acceleration