🤖 AI Summary
Diffusion Transformers are hindered by the quadratic computational complexity of self-attention, and existing pruning methods struggle to simultaneously achieve differentiability, efficiency, and compatibility with hardware-imposed static computational budgets. This work proposes a residual-aware differentiable Top-$k$ pruning framework that integrates a residual-aware straight-through estimator, context-aware routing, and an adaptive scaling strategy. The approach enforces deterministic token counts while preserving end-to-end trainability, thereby satisfying the static compilation requirements of modern hardware accelerators. Evaluated on mainstream models such as SD3.5, the method achieves a 1.54× wall-clock speedup with superior image fidelity compared to existing baselines, establishing a new Pareto frontier between computational efficiency and generation quality.
📝 Abstract
Diffusion Transformers (DiTs) incur prohibitive computational costs due to the quadratic scaling of self-attention. Existing pruning methods fail to simultaneously satisfy differentiability, efficiency, and the strict static computational budgets imposed by hardware accelerators. To address this, we propose Shiva-DiT, which reconciles these conflicting requirements via Residual-Based Differentiable Top-$k$ Selection. By leveraging a residual-aware straight-through estimator, our method enforces deterministic token counts for static compilation while preserving end-to-end learnability through residual gradient estimation. Furthermore, we introduce a Context-Aware Router and an Adaptive Ratio Policy that autonomously learn an adaptive pruning schedule. Experiments on mainstream models, including SD3.5, demonstrate that Shiva-DiT establishes a new Pareto frontier, achieving a 1.54$\times$ wall-clock speedup with superior fidelity compared to existing baselines, while eliminating ragged-tensor overheads.
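To make the core idea concrete: a hard Top-$k$ selection keeps exactly $k$ tokens (so tensor shapes are static and compilable), but its gradient is zero almost everywhere; a straight-through estimator recovers trainability by routing gradients through a soft relaxation of the scores. The minimal NumPy sketch below illustrates only this generic Top-$k$ + STE pattern, not the paper's residual-aware estimator or router; the function names and the sigmoid relaxation are illustrative assumptions.

```python
import numpy as np

def topk_token_mask(scores, k):
    """Hard top-k mask: keep exactly k tokens, so the pruned token
    count (and thus the tensor shape) is deterministic -- a
    prerequisite for static compilation on accelerators."""
    idx = np.argpartition(scores, -k)[-k:]   # indices of the k largest scores
    mask = np.zeros_like(scores)
    mask[idx] = 1.0
    return mask

def ste_topk(scores, k):
    """Straight-through estimator (STE) sketch: the forward pass
    returns the hard 0/1 mask, while the backward pass would treat
    (hard - soft) as a constant so gradients flow through the
    differentiable soft scores. In an autograd framework this is
    written as: soft + stop_gradient(hard - soft)."""
    soft = 1.0 / (1.0 + np.exp(-scores))     # differentiable relaxation
    hard = topk_token_mask(scores, k)
    return soft + (hard - soft)              # equals `hard` in the forward pass

# Toy router scores for 6 tokens; keep k = 3.
scores = np.array([0.2, -1.3, 2.1, 0.7, -0.4, 1.5])
mask = ste_topk(scores, k=3)
print(mask)   # binary mask with exactly 3 ones
```

In an actual training loop the `stop_gradient` trick (e.g. `soft + (hard - soft).detach()` in PyTorch) is what keeps the selection end-to-end learnable despite the non-differentiable Top-$k$ step; the paper's residual-aware variant refines how that backward signal is estimated.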