Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion Transformers (DiTs) incur high inference cost, and existing token compression methods ignore the denoising priors of the diffusion process, yielding limited acceleration and degraded generation quality. To address this, the paper formalizes a structure-then-detail denoising prior and introduces a dynamic token merging paradigm that attends to regions the diffusion process does not: it explicitly models the hierarchical structure-detail prior to adaptively merge visually non-critical tokens, and combines dynamic compression-ratio adjustment with prompt reweighting in a training-free, post-training framework. The method is architecture-agnostic, compatible with arbitrary DiT backbones, schedulers, and datasets, and achieves a 1.55× inference speedup with nearly lossless FID and CLIP Score, significantly outperforming existing token compression approaches across diverse benchmarks.

📝 Abstract
Diffusion transformers have shown exceptional performance in visual generation but incur high computational costs. Token reduction techniques that compress models by sharing the denoising process among similar tokens have been introduced. However, existing approaches neglect the denoising priors of the diffusion models, leading to suboptimal acceleration and diminished image quality. This study proposes a novel concept: attend to prune feature redundancies in areas not attended by the diffusion process. We analyze the location and degree of feature redundancies based on the structure-then-detail denoising priors. Subsequently, we introduce SDTM, a structure-then-detail token merging approach that dynamically compresses feature redundancies. Specifically, we design dynamic visual token merging, compression ratio adjusting, and prompt reweighting for different stages. Applied post-training, the proposed method integrates seamlessly into any DiT architecture. Extensive experiments across various backbones, schedulers, and datasets showcase the superiority of our method; for example, it achieves 1.55× acceleration with negligible impact on image quality. Project page: https://github.com/ICTMCG/SDTM.
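To make the token-merging idea concrete, here is a minimal NumPy sketch of ToMe-style bipartite matching, the standard mechanism such methods build on: alternate tokens act as merge candidates and targets, and the `r` most redundant candidates are averaged into their most similar target. This is an illustrative sketch, not SDTM's actual implementation; the function name and the even/odd split are assumptions for the example.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most redundant tokens into their nearest neighbours.

    Minimal ToMe-style bipartite matching sketch (not the paper's exact
    algorithm): tokens at even positions are merge candidates (src),
    tokens at odd positions are targets (dst). `tokens` has shape
    (N, D); the result has shape (N - r, D).
    """
    src, dst = tokens[0::2], tokens[1::2]
    # Cosine similarity between every src token and every dst token.
    norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    sim = norm(src) @ norm(dst).T                  # (n_src, n_dst)
    best_dst = sim.argmax(axis=-1)                 # nearest dst per src
    best_sim = sim.max(axis=-1)
    # Merge the r src tokens whose nearest dst is most similar.
    merge_ids = np.argsort(-best_sim)[:r]
    keep_ids = np.setdiff1d(np.arange(len(src)), merge_ids)
    dst = dst.copy()
    for i in merge_ids:                            # average src into its dst
        j = best_dst[i]
        dst[j] = (dst[j] + src[i]) / 2
    return np.concatenate([src[keep_ids], dst], axis=0)
```

Because merged tokens share one denoised representation, each merge removes a token's worth of attention and MLP compute for the rest of the block; the paper's contribution is deciding *where* and *how much* to merge using the denoising prior, not the matching primitive itself.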
Problem

Research questions and friction points this paper is trying to address.

High computational cost of diffusion transformer inference
Existing token merging neglects the denoising priors of diffusion models
Preserving image quality while accelerating the denoising process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-then-detail token merging for DiT
Dynamic compression based on denoising priors
Post-training integration into any DiT architecture
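The "dynamic compression based on denoising priors" point can be sketched as a stage-dependent merge-ratio schedule: merge conservatively while early steps lay down global structure, then more aggressively once later steps only refine detail. The linear ramp, direction, and the `r_structure`/`r_detail` values below are illustrative assumptions standing in for the paper's adaptive ratio adjustment.

```python
def merge_ratio(step: int, total_steps: int,
                r_structure: float = 0.2, r_detail: float = 0.6) -> float:
    """Illustrative stage-dependent compression schedule (hypothetical values).

    Early denoising steps sketch global structure, so merging is kept
    conservative; later detail-refinement steps tolerate heavier merging
    of visually non-critical regions. A linear ramp is a stand-in for
    the adaptive compression-ratio adjustment described in the paper.
    """
    t = step / max(total_steps - 1, 1)  # normalized progress in [0, 1]
    return r_structure + (r_detail - r_structure) * t
```

In a real sampler this ratio would set the `r` passed to the token-merging step at each timestep, so the compression budget tracks the structure-then-detail prior rather than staying fixed.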