🤖 AI Summary
Existing diffusion-based video editing methods suffer from high computational costs due to iterative denoising and overlook the redundancy in spatiotemporal attention within DiT architectures. This work proposes HetCache, a training-free framework that leverages spatial priors to partition spatiotemporal tokens into contextual and generative categories. By evaluating contextual relevance, HetCache selectively caches the most semantically representative contextual tokens, enabling heterogeneous caching. This approach transcends the limitation of merely reusing features across timesteps, achieving a 2.67× reduction in latency and FLOPs on mainstream DiT models while preserving editing quality with negligible degradation.
📝 Abstract
Diffusion-based video editing has emerged as an important paradigm for high-quality and flexible content generation. However, despite their generality and strong modeling capacity, Diffusion Transformers (DiT) remain computationally expensive due to the iterative denoising process, posing challenges for practical deployment. Existing video diffusion acceleration methods primarily exploit denoising timestep-level feature reuse, which mitigates the redundancy in the denoising process but overlooks the architectural redundancy within the DiT itself: many attention operations over spatio-temporal tokens are redundantly executed, contributing little to nothing to the model output. This work introduces HetCache, a training-free diffusion acceleration framework designed to exploit the inherent heterogeneity in diffusion-based masked video-to-video (MV2V) generation and editing. Instead of uniformly reusing or randomly sampling tokens, HetCache assesses the contextual relevance and interaction strength among different types of tokens at designated computing steps. Guided by spatial priors, it divides the spatio-temporal tokens in the DiT model into context and generative tokens, and selectively caches the context tokens that exhibit the strongest correlation with, and most representative semantics for, the generative ones. This strategy reduces redundant attention operations while maintaining editing consistency and fidelity. Experiments show that HetCache achieves substantial acceleration, including a 2.67$\times$ latency speedup and FLOPs reduction over commonly used foundation models, with negligible degradation in editing quality.
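The core selection step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical simplification (not the paper's implementation): it partitions tokens by a spatial edit mask into generative and context sets, uses dot-product similarity as a stand-in for attention interaction strength, and caches only the top-k most relevant context tokens. The function name `select_context_cache` and the scoring rule are illustrative assumptions.

```python
import numpy as np

def select_context_cache(tokens, edit_mask, k):
    """Sketch of heterogeneous token caching (simplified assumption).

    tokens:    (N, D) array of spatio-temporal token features
    edit_mask: (N,) boolean array; True marks tokens inside the edited
               region (generative), False marks surrounding context
    k:         number of context tokens to keep in the cache
    """
    gen = tokens[edit_mask]       # tokens being regenerated
    ctx = tokens[~edit_mask]      # tokens that only provide context
    # Dot-product similarity as a proxy for attention interaction strength.
    scores = ctx @ gen.T          # (num_ctx, num_gen)
    # A context token's relevance = its strongest link to any generative token.
    relevance = scores.max(axis=1)
    keep = np.argsort(relevance)[-k:]   # indices of the top-k context tokens
    return ctx[keep], keep

# Toy usage: 10 tokens of dimension 4, first 3 are generative.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 4))
edit_mask = np.zeros(10, dtype=bool)
edit_mask[:3] = True
cached, kept_idx = select_context_cache(tokens, edit_mask, k=2)
```

In this simplified view, the attention in subsequent cached steps would then run only over the generative tokens plus the `k` cached context tokens, rather than all `N` tokens, which is where the FLOPs reduction comes from.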