🤖 AI Summary
This work addresses the semantic ambiguity and dimensional redundancy in conditional embeddings of diffusion Transformers. We reveal for the first time that these high-dimensional embeddings exhibit extreme angular similarity and that their semantic information is concentrated in a small subset of dimensions, forming a "semantic bottleneck." Exploiting this insight, we propose a structured pruning method based on angular similarity analysis and dimension importance evaluation. Our approach removes up to two-thirds of the embedding dimensions while preserving, and in some cases enhancing, generation quality, thereby significantly improving the efficiency of conditional modeling. These results empirically validate the high concentration of semantic content within conditional embeddings.
📝 Abstract
Diffusion Transformers have achieved state-of-the-art performance in class-conditional and multimodal generation, yet the structure of their learned conditional embeddings remains poorly understood. In this work, we present the first systematic study of these embeddings and uncover a notable redundancy: class-conditioned embeddings exhibit extreme angular similarity, exceeding 99% on ImageNet-1K, while continuous-condition tasks such as pose-guided image generation and video-to-audio generation reach over 99.9%. We further find that semantic information is concentrated in a small subset of dimensions, with head dimensions carrying the dominant signal and tail dimensions contributing minimally. By pruning low-magnitude dimensions--removing up to two-thirds of the embedding space--we show that generation quality and fidelity remain largely unaffected, and in some cases improve. These results reveal a semantic bottleneck in Transformer-based diffusion models, providing new insights into how semantics are encoded and suggesting opportunities for more efficient conditioning mechanisms.
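The two measurements the abstract describes, pairwise angular similarity between conditional embeddings and magnitude-based pruning of tail dimensions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding matrix, its size, and the one-third keep ratio are placeholder assumptions; real embeddings would come from a trained diffusion Transformer's conditioning table.

```python
import numpy as np

# Placeholder embedding table: num_classes x dim. In the paper's setting
# these would be the learned class-conditioning embeddings of a diffusion
# Transformer (e.g. 1000 ImageNet-1K classes); here they are random.
rng = np.random.default_rng(0)
num_classes, dim = 1000, 1152
emb = rng.normal(size=(num_classes, dim))

# Mean pairwise cosine (angular) similarity between class embeddings,
# excluding the diagonal self-similarities.
unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
cos = unit @ unit.T
mean_sim = (cos.sum() - num_classes) / (num_classes * (num_classes - 1))

# Rank dimensions by mean absolute magnitude across classes and keep
# only the top third ("head" dimensions), pruning the low-magnitude tail.
importance = np.abs(emb).mean(axis=0)
keep = np.sort(np.argsort(importance)[::-1][: dim // 3])
pruned = emb[:, keep]
print(pruned.shape)  # (1000, 384): two-thirds of dimensions removed
```

On random data `mean_sim` is near zero; the paper's finding is that for trained conditional embeddings this statistic exceeds 99%, which is what motivates pruning the tail dimensions.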