🤖 AI Summary
To address high attention computation complexity, abrupt error surges, and detail distortion in diffusion Transformer (DiT)-based video generation, this paper proposes a unified caching and pruning framework. The method introduces three core innovations: (1) an Error-Aware Dynamic Caching Window (EDCW), which adaptively adjusts the caching span based on the U-shaped distribution of attention differences across diffusion steps; (2) PCA-based Slicing (PCAS), which applies principal component analysis to spatiotemporal attention features to identify redundant channels; and (3) Dynamic Weight Shifting (DWS), which smoothly migrates critical weight paths after pruning. Together, these components enable adaptive integration of caching and pruning. Experiments show that the approach significantly improves inference speed and GPU memory efficiency while preserving fine-grained video fidelity, achieving state-of-the-art results on key metrics including FVD and LPIPS and outperforming existing caching-only and pruning-only methods.
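The summary above describes PCA-based Slicing as applying principal component analysis to attention features to flag redundant channels. As a rough illustration of that idea (not the paper's actual algorithm; the feature layout, importance score, and pruning ratio below are assumptions), one could score each channel by its energy-weighted loading on the principal components and prune the lowest-scoring fraction:

```python
import numpy as np

def pca_redundant_channels(features: np.ndarray, prune_ratio: float = 0.25) -> np.ndarray:
    """Sketch of PCA-style channel-redundancy scoring (illustrative only).

    features: (tokens, channels) matrix of spatiotemporal attention features.
    Returns the indices of the `prune_ratio` fraction of channels whose
    energy-weighted PCA loadings are smallest, i.e. the channels that
    contribute least to the dominant principal components.
    """
    # Center the features so the SVD recovers principal directions.
    centered = features - features.mean(axis=0, keepdims=True)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Channel importance: loading magnitude on each PC, weighted by that
    # PC's singular value (its share of the variance).
    importance = (np.abs(vt) * s[:, None]).sum(axis=0)
    n_prune = int(features.shape[1] * prune_ratio)
    return np.argsort(importance)[:n_prune]  # least important channels first
```

Channels that carry almost no variance end up with near-zero loadings on the high-energy components, so they surface at the front of the sorted order and become pruning candidates.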
📝 Abstract
Diffusion Transformers (DiT) excel at video generation but face significant computational challenges due to the quadratic complexity of attention. Notably, attention differences between adjacent diffusion steps follow a U-shaped pattern. Current methods exploit this property by caching attention blocks; however, they still struggle with sudden error spikes and large discrepancies. To address these issues, we propose UniCP, a unified caching and pruning framework for efficient video generation. UniCP optimizes both the temporal and spatial dimensions through: (1) an Error-Aware Dynamic Cache Window (EDCW), which dynamically adjusts cache window sizes for different blocks at various timesteps, adapting to abrupt error changes; and (2) PCA-based Slicing (PCAS) with Dynamic Weight Shift (DWS), where PCAS prunes redundant attention components and DWS integrates caching and pruning by enabling dynamic switching between pruned and cached outputs. By adjusting cache windows and pruning redundant components, UniCP improves computational efficiency while maintaining video detail fidelity. Experimental results show that UniCP outperforms existing methods in both performance and efficiency.
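The abstract's EDCW mechanism reuses cached attention outputs across diffusion steps and recomputes when errors spike. A minimal sketch of that control loop, under stated assumptions (the error probe on input drift, the threshold, the update rule, and the `max_window` cap are all hypothetical stand-ins, not the paper's formulation), might look like:

```python
import numpy as np

def run_with_dynamic_cache(attn_fn, latents, num_steps,
                           err_thresh=0.05, max_window=4):
    """Sketch of an error-aware dynamic caching loop (illustrative only).

    attn_fn(x, t) stands in for an expensive attention block. Its cached
    output is reused while a cheap error probe stays below err_thresh;
    on a spike, or when the cache window expires, it is recomputed and
    the window resets.
    """
    cached = None
    window = 1
    reused = 0
    x = latents
    for t in range(num_steps):
        if cached is not None and window < max_window:
            # Cheap probe: relative drift of the input since the cache point.
            denom = np.linalg.norm(cached["x"]) + 1e-8
            err = np.linalg.norm(x - cached["x"]) / denom
            if err < err_thresh:
                out = cached["out"]        # reuse cached attention output
                window += 1
                reused += 1
                x = x + 0.1 * out          # toy latent update
                continue
        out = attn_fn(x, t)                # recompute on spike or expiry
        cached = {"x": x.copy(), "out": out}
        window = 1
        x = x + 0.1 * out
    return x, reused
```

When successive steps change the latent slowly (the bottom of the U-shaped error curve), most steps hit the cache; a sudden spike in the probe forces an immediate recompute, which is the adaptivity the abstract attributes to EDCW.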