DiffSparse: Accelerating Diffusion Transformers with Learned Token Sparsity

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion Transformers struggle to achieve efficient acceleration in few-step inference due to inefficient feature caching, handcrafted sparsity strategies, and the need for full forward passes at certain steps. This work proposes a differentiable, layer-wise sparse optimization framework that, for the first time, enables end-to-end learned token sparsity allocation in diffusion Transformers, eliminating reliance on heuristic rules. By integrating a dynamic programming solver with a two-stage training strategy, the method avoids full-step forward computation entirely. Evaluated on mainstream models such as PixArt-α, the approach reduces computational cost by 54% under 20-step sampling while achieving generation quality that surpasses the original model, significantly outperforming existing acceleration techniques.
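The token-caching mechanism the summary refers to can be pictured with a minimal sketch (all names and shapes here are hypothetical illustrations, not the paper's implementation): at each denoising step, a layer recomputes only the tokens marked active and reuses features cached from the previous step for the rest.

```python
import numpy as np

def cached_layer_forward(tokens, cache, active_mask, layer_fn):
    """Recompute only the active tokens; reuse cached features for the rest.

    tokens:      (N, D) current token features
    cache:       (N, D) this layer's output from the previous denoising step
    active_mask: (N,) boolean, True for tokens this layer recomputes
    layer_fn:    the layer's forward function (hypothetical stand-in)
    """
    out = cache.copy()                                 # start from cached features
    out[active_mask] = layer_fn(tokens[active_mask])   # recompute the active subset only
    return out

# Toy example: a "layer" that doubles its input, with 2 of 8 tokens cached.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))
cache = np.zeros((8, 4))
active = np.array([True, True, False, True, True, True, False, True])
out = cached_layer_forward(tokens, cache, active, lambda x: 2 * x)
```

The compute saved scales with the fraction of inactive tokens, which is exactly the per-layer sparsity the framework learns to allocate.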
📝 Abstract
Diffusion models demonstrate outstanding performance in image generation, but their multi-step inference mechanism incurs immense computational cost. Previous works accelerate inference by leveraging layer or token caching to reduce computation. However, these methods fail to achieve superior acceleration in few-step diffusion transformer models due to inefficient feature caching strategies, manually designed sparsity allocation, and the retention of complete forward computation at several steps. To tackle these challenges, we propose a differentiable layer-wise sparsity optimization framework for diffusion transformer models, leveraging token caching to reduce token computation costs and enhance acceleration. Our method optimizes layer-wise sparsity allocation in an end-to-end manner through a learnable network combined with a dynamic programming solver. Additionally, our proposed two-stage training strategy eliminates the need for the full-step processing required by existing methods, further improving efficiency. We conducted extensive experiments on a range of diffusion-transformer models, including DiT-XL/2, PixArt-$\alpha$, FLUX, and Wan2.1. Across these architectures, our method consistently improves efficiency without degrading sample quality. For example, on PixArt-$\alpha$ with 20 sampling steps, we reduce computational cost by $54\%$ while achieving generation metrics that surpass those of the original model, substantially outperforming prior approaches. These results demonstrate that our method delivers large efficiency gains while often improving generation quality.
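The layer-wise allocation step can be pictured as a small knapsack-style dynamic program (a sketch under assumed inputs, not the paper's actual solver): given a predicted quality score for each layer at a few candidate sparsity levels, choose one level per layer so total compute stays within a budget while the summed score is maximized.

```python
def allocate_sparsity(scores, costs, budget):
    """Knapsack-style DP over layers (illustrative, hypothetical inputs).

    scores[l][k]: predicted quality if layer l uses sparsity level k
    costs[l][k]:  integer compute cost of that choice
    budget:       total integer compute budget
    Returns (best_score, per-layer level indices), or (None, None) if infeasible.
    """
    L = len(scores)
    NEG = float("-inf")
    dp = [NEG] * (budget + 1)          # dp[b] = best score using cost exactly b
    dp[0] = 0.0
    choice = [[None] * (budget + 1) for _ in range(L)]
    for l in range(L):
        new = [NEG] * (budget + 1)
        for b in range(budget + 1):
            if dp[b] == NEG:
                continue
            for k, (s, c) in enumerate(zip(scores[l], costs[l])):
                nb = b + c
                if nb <= budget and dp[b] + s > new[nb]:
                    new[nb] = dp[b] + s
                    choice[l][nb] = (b, k)   # remember predecessor and level
        dp = new
    best_b = max(range(budget + 1), key=lambda b: dp[b])
    if dp[best_b] == NEG:
        return None, None
    levels, b = [], best_b               # backtrack the chosen level per layer
    for l in reversed(range(L)):
        prev_b, k = choice[l][b]
        levels.append(k)
        b = prev_b
    return dp[best_b], levels[::-1]

# Toy run: 3 layers, levels = (dense, 50% sparse); budget 4 forces sparsity.
scores = [[1.0, 0.8], [1.0, 0.9], [1.0, 0.7]]
costs = [[2, 1], [2, 1], [2, 1]]
best, levels = allocate_sparsity(scores, costs, budget=4)
```

In the toy run the DP keeps the layer that loses the most quality when sparsified (the third) dense and sparsifies the other two, which is the behavior a learned allocator replaces heuristic rules with.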
Problem

Research questions and friction points this paper is trying to address.

diffusion models
computational cost
token sparsity
inference acceleration
diffusion transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion transformers
token sparsity
differentiable optimization
acceleration
dynamic programming