Shiva-DiT: Residual-Based Differentiable Top-$k$ Selection for Efficient Diffusion Transformers

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion Transformers are hindered by the quadratic computational complexity of self-attention, and existing pruning methods struggle to simultaneously achieve differentiability, efficiency, and compatibility with hardware-imposed static computational budgets. This work proposes a residual-aware differentiable Top-$k$ pruning framework that integrates a residual-aware straight-through estimator, context-aware routing, and an adaptive scaling strategy. The approach enforces deterministic token counts while preserving end-to-end trainability, thereby satisfying the static compilation requirements of modern hardware accelerators. Evaluated on mainstream models such as SD3.5, the method achieves a 1.54× wall-clock speedup with superior image fidelity compared to existing baselines, establishing a new Pareto frontier between computational efficiency and generation quality.
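To make the core mechanism concrete, below is a minimal PyTorch sketch of differentiable Top-$k$ token selection via a straight-through estimator (STE): the forward pass keeps exactly $k$ tokens (a deterministic count, as static compilation requires), while the backward pass routes gradients through a soft surrogate. This is an illustration of the general STE Top-$k$ idea, not the paper's residual-aware formulation; the function name `topk_mask_ste` and the sigmoid surrogate are assumptions of this sketch.

```python
import torch

def topk_mask_ste(scores: torch.Tensor, k: int) -> torch.Tensor:
    """Hard 0/1 mask keeping the top-k scores per row, differentiable via STE.

    Forward: exact hard mask -> exactly k kept tokens (static shapes).
    Backward: gradients flow through a soft surrogate (sigmoid of the
    margin to the k-th score), the classic straight-through trick.
    """
    # Hard selection: exactly k ones per row, so the kept-token count is fixed.
    topk = scores.topk(k, dim=-1)
    hard = torch.zeros_like(scores).scatter(-1, topk.indices, 1.0)

    # Soft surrogate used only for gradients; centered at the k-th score so
    # tokens near the selection boundary receive the strongest gradient signal.
    kth = topk.values[..., -1:].detach()
    soft = torch.sigmoid(scores - kth)

    # Straight-through: forward value equals `hard`, backward uses d(soft).
    return hard + soft - soft.detach()

# Usage: mask has shape (batch, seq_len); gradients reach `scores`
# even though the forward mask is a hard, non-differentiable selection.
scores = torch.randn(2, 16, requires_grad=True)
mask = topk_mask_ste(scores, k=8)
mask.sum().backward()
assert scores.grad is not None
```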

📝 Abstract
Diffusion Transformers (DiTs) incur prohibitive computational costs due to the quadratic scaling of self-attention. Existing pruning methods fail to simultaneously satisfy differentiability, efficiency, and the strict static budgets imposed by hardware accelerators. To address this, we propose Shiva-DiT, which reconciles these conflicting requirements via Residual-Based Differentiable Top-$k$ Selection. By leveraging a residual-aware straight-through estimator, our method enforces deterministic token counts for static compilation while preserving end-to-end learnability through residual gradient estimation. Furthermore, we introduce a Context-Aware Router and an Adaptive Ratio Policy that autonomously learn an adaptive pruning schedule. Experiments on mainstream models, including SD3.5, demonstrate that Shiva-DiT establishes a new Pareto frontier, achieving a 1.54$\times$ wall-clock speedup with superior fidelity compared to existing baselines while effectively eliminating ragged-tensor overheads.
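The following sketch illustrates why a fixed token budget eliminates ragged-tensor overheads: with $k$ constant, the gathered token tensor has a static shape, so the attention kernel can be compiled once. `ContextRouter` below is a hypothetical stand-in for the paper's Context-Aware Router (here, a single linear head scoring tokens conditioned on a pooled context vector), not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class ContextRouter(nn.Module):
    """Illustrative token-scoring router conditioned on a context vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # Condition each token's keep-score on a shared context vector
        # (e.g. the timestep/text embedding in a DiT block).
        return self.score(x + ctx.unsqueeze(1)).squeeze(-1)  # (B, N)

def prune_tokens(x: torch.Tensor, scores: torch.Tensor, k: int):
    # Deterministic top-k indices -> output shape (B, k, D) is known at
    # compile time, so no dynamic/ragged shapes reach the attention kernel.
    idx = scores.topk(k, dim=-1).indices                          # (B, k)
    kept = x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
    return kept, idx

B, N, D, k = 2, 64, 128, 32
x, ctx = torch.randn(B, N, D), torch.randn(B, D)
kept, idx = prune_tokens(x, ContextRouter(D)(x, ctx), k)
assert kept.shape == (B, k, D)  # static shape regardless of input content
```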
Problem

Research questions and friction points this paper is trying to address.

Diffusion Transformers
self-attention
pruning
static budget
computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable Pruning
Residual-Based Top-k Selection
Diffusion Transformers
Static Budget Optimization
Context-Aware Routing
👥 Authors
Jiaji Zhang · Zhejiang University (CS)
Hailiang Zhao · ZJU 100 Young Professor, Zhejiang University (Service Computing, Edge Computing, Learning-Augmented Algorithms)
Guoxuan Zhu · AIOS, Alibaba Group
Ruichao Sun · College of Computer Science and Technology, Zhejiang University
Jiaju Wu · Nanyang Technological University
Xinkui Zhao · College of Computer Science and Technology, Zhejiang University
Hanlin Tang · AIOS, Alibaba Group
Weiyi Lu · AIOS, Alibaba Group
Kan Liu · AIOS, Alibaba Group
Tao Lan · AIOS, Alibaba Group
Lin Qu · AIOS, Alibaba Group
Shuiguang Deng · College of Computer Science and Technology, Zhejiang University