AccelAes: Accelerating Diffusion Transformers for Training-Free Aesthetic-Enhanced Image Generation

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference latency of Diffusion Transformers (DiTs), caused by the quadratic complexity of self-attention over spatial tokens, which hinders practical deployment. The authors propose a training-free acceleration framework that, for the first time, leverages prompt semantics and cross-attention signals to construct a one-shot aesthetic focus mask (AesMask). The mask dynamically reallocates computation to regions of higher aesthetic value. Combined with SkipSparse localized computation and a step-level prediction cache, the approach significantly reduces redundant computation. Evaluated on Lumina-Next, the method achieves a 2.11× speedup while improving ImageReward by 11.9%, and consistently improves both generation efficiency and aesthetic quality across multiple DiT models.
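The core idea of AesMask can be illustrated with a small sketch. The function below is a hypothetical reconstruction, not the paper's exact rule: it assumes a cross-attention matrix over text and spatial tokens, averages the attention mass placed by aesthetic prompt tokens, and keeps the top-k spatial tokens as the one-shot mask. The names `aesthetic_focus_mask` and `keep_ratio` are illustrative assumptions.

```python
import numpy as np

def aesthetic_focus_mask(cross_attn, aesthetic_token_ids, keep_ratio=0.5):
    """Hypothetical sketch of an AesMask-style one-shot mask.

    cross_attn: (num_text_tokens, num_spatial_tokens) attention weights
    aesthetic_token_ids: indices of prompt tokens deemed aesthetic
    keep_ratio: fraction of spatial tokens to mark as high aesthetic value
    Returns a boolean mask over spatial tokens.
    """
    # Aggregate attention mass the aesthetic tokens place on each spatial token.
    affinity = cross_attn[aesthetic_token_ids].mean(axis=0)
    k = max(1, int(keep_ratio * affinity.size))
    # Keep the top-k spatial tokens by aesthetic affinity.
    thresh = np.partition(affinity, -k)[-k]
    return affinity >= thresh

# Toy example: 4 text tokens attending over 8 spatial tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 8))
attn /= attn.sum(axis=1, keepdims=True)  # row-normalize like softmax output
mask = aesthetic_focus_mask(attn, aesthetic_token_ids=[1, 3], keep_ratio=0.25)
print(mask.sum())  # 2 spatial tokens kept
```

A SkipSparse-style step would then restrict full computation and guidance to the `True` entries of this mask while treating the remaining tokens cheaply.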

📝 Abstract
Diffusion Transformers (DiTs) are a dominant backbone for high-fidelity text-to-image generation due to strong scalability and alignment at high resolutions. However, quadratic self-attention over dense spatial tokens leads to high inference latency and limits deployment. We observe that denoising is spatially non-uniform with respect to aesthetic descriptors in the prompt. Regions associated with aesthetic tokens receive concentrated cross-attention and show larger temporal variation, while low-affinity regions evolve smoothly with redundant computation. Based on this insight, we propose AccelAes, a training-free framework that accelerates DiTs through aesthetics-aware spatio-temporal reduction while improving perceptual aesthetics. AccelAes builds AesMask, a one-shot aesthetic focus mask derived from prompt semantics and cross-attention signals. When localized computation is feasible, SkipSparse reallocates computation and guidance to masked regions. We further reduce temporal redundancy using a lightweight step-level prediction cache that periodically replaces full Transformer evaluations. Experiments on representative DiT families show consistent acceleration and improved aesthetics-oriented quality. On Lumina-Next, AccelAes achieves a 2.11× speedup and improves ImageReward by +11.9% over the dense baseline. Code is available at https://github.com/xuanhuayin/AccelAes.
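The step-level prediction cache described in the abstract can be sketched as follows. This is a minimal assumed form, not the paper's implementation: the full Transformer runs only every `refresh_every` steps, and in between the cached predictions are reused or linearly extrapolated. The update rule, step size, and function names are all illustrative assumptions.

```python
import numpy as np

def denoise_with_cache(model, x, num_steps, refresh_every=3):
    """Hypothetical step-level prediction cache for a diffusion sampler.

    Runs the full model only every `refresh_every` steps; in between,
    reuses or first-order-extrapolates cached predictions.
    Returns the final state and the number of full model evaluations.
    """
    prev, prev2 = None, None
    evals = 0
    for t in range(num_steps):
        if t % refresh_every == 0 or prev is None:
            eps = model(x, t)            # full Transformer evaluation
            evals += 1
        elif prev2 is None:
            eps = prev                   # reuse the cached prediction
        else:
            eps = prev + (prev - prev2)  # first-order extrapolation
        prev2, prev = prev, eps
        x = x - 0.1 * eps                # toy update rule (assumed step size)
    return x, evals

# Toy model: predicted noise is a scaled copy of the current state.
model = lambda x, t: 0.5 * x
x0 = np.ones(4)
x_final, evals = denoise_with_cache(model, x0, num_steps=9, refresh_every=3)
print(evals)  # 3 full evaluations instead of 9
```

With `refresh_every=3`, two thirds of the Transformer calls are skipped, which is the kind of temporal-redundancy reduction the cache targets; the actual speedup depends on how cheap the cached step is relative to a full evaluation.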
Problem

Research questions and friction points this paper is trying to address.

Diffusion Transformers
inference latency
aesthetic-enhanced image generation
spatio-temporal redundancy
text-to-image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Transformers
training-free acceleration
aesthetic-aware masking
spatio-temporal redundancy reduction
cross-attention guidance