🤖 AI Summary
This work addresses the quadratic growth in computational and memory cost that multi-conditional Diffusion Transformers (DiTs) incur under the “concatenate-and-attend” mechanism, which introduces substantial spatial and semantic redundancy in cross-modal interactions. To this end, the authors propose the Position-aligned and Keyword-scoped Attention (PKA) framework, which eliminates spatial redundancy through local patch alignment, suppresses semantic redundancy via a semantic-aware mask that prunes irrelevant keyword interactions, and accelerates training convergence with a conditional sensitivity-aware sampling strategy. PKA is the first approach to systematically identify and remove redundant attention in multi-conditional DiTs, achieving linear attention complexity. Experiments show that PKA preserves high-fidelity multi-conditional generation while delivering a 10× inference speedup and a 5.1× reduction in GPU memory, substantially improving efficiency and scalability.
📝 Abstract
While modern text-to-image models excel at prompt-based generation, they often lack the fine-grained control needed for specific user requirements such as spatial layouts or subject appearances. Multi-condition control addresses this, yet its integration into Diffusion Transformers (DiTs) is bottlenecked by the conventional “concatenate-and-attend” strategy, which incurs quadratic computational and memory overhead as the number of conditions grows. Our analysis reveals that much of this cross-modal interaction is spatially or semantically redundant. Motivated by this, we propose Position-aligned and Keyword-scoped Attention (PKA), a highly efficient framework designed to eliminate these redundancies. Specifically, Position-Aligned Attention (PAA) linearizes spatial control by enforcing localized patch alignment, while Keyword-Scoped Attention (KSA) prunes irrelevant subject-driven interactions via semantic-aware masking. To facilitate efficient learning, we further introduce a Conditional Sensitivity-Aware Sampling (CSAS) strategy that reweights the training objective toward critical denoising phases, drastically accelerating convergence and enhancing conditional fidelity. Empirically, PKA delivers a 10.0× inference speedup and a 5.1× VRAM saving, providing a scalable and resource-friendly solution for high-fidelity multi-conditional generation.
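The two attention variants described above can be illustrated with a minimal sketch. The function names, window size, and keyword mask below are hypothetical illustrations of the stated ideas (localized patch alignment and semantic-aware masking), not the paper's actual implementation: PAA restricts each image patch to condition patches near its own position (cost O(N·w) instead of O(N²)), while KSA masks out text tokens that are not flagged as keywords before the softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_aligned_attention(img_q, cond_kv, window=1):
    """Hypothetical PAA sketch: each image patch attends only to
    condition patches within a local window around its own index,
    assuming both sequences share the same patch grid.
    img_q, cond_kv: (N, d) arrays."""
    N, d = img_q.shape
    out = np.zeros_like(img_q)
    for i in range(N):
        lo, hi = max(0, i - window), min(N, i + window + 1)
        scores = img_q[i] @ cond_kv[lo:hi].T / np.sqrt(d)   # (hi - lo,)
        out[i] = softmax(scores) @ cond_kv[lo:hi]
    return out

def keyword_scoped_attention(img_q, text_kv, keyword_mask):
    """Hypothetical KSA sketch: image queries attend only to text
    tokens marked True in keyword_mask; the rest are masked out."""
    d = img_q.shape[-1]
    scores = img_q @ text_kv.T / np.sqrt(d)                 # (N, T)
    scores[:, ~keyword_mask] = -np.inf                      # prune non-keywords
    return softmax(scores) @ text_kv
```

Because PAA touches only O(window) condition patches per query and KSA drops masked tokens from the interaction entirely, the combined attention cost grows linearly rather than quadratically with the concatenated condition length, matching the complexity claim in the abstract.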