🤖 AI Summary
This work addresses the limitations of Diffusion Transformers in discriminative representation learning, which stem from inefficient timestep search and underutilized representational capacity. To overcome these challenges, the authors propose A-SelecT, a method that, for the first time, adaptively identifies the most informative timestep within a single forward pass, eliminating the need for exhaustive search while achieving both computational efficiency and strong representation quality. By integrating a feature-importance evaluation mechanism, A-SelecT significantly outperforms existing diffusion-based approaches on standard image classification and segmentation benchmarks, breaking through the performance bottleneck that has so far limited diffusion models on discriminative tasks.
📝 Abstract
Diffusion models have significantly reshaped generative artificial intelligence and are increasingly explored for their capacity in discriminative representation learning. The Diffusion Transformer (DiT) has recently gained attention as a promising alternative to conventional U-Net-based diffusion models, opening an avenue toward downstream discriminative tasks via generative pre-training. However, its training efficiency and representational capacity remain constrained by inadequate timestep search and insufficient exploitation of DiT-specific feature representations. Motivated by this observation, we introduce Automatically Selected Timestep (A-SelecT), which dynamically pinpoints DiT's most information-rich timestep from the selected transformer feature in a single run, eliminating both computationally intensive exhaustive timestep search and suboptimal discriminative feature selection. Extensive experiments on classification and segmentation benchmarks demonstrate that DiT, empowered by A-SelecT, surpasses all prior diffusion-based approaches in both efficiency and effectiveness.
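To make the core idea concrete, here is a minimal, purely illustrative sketch of scoring candidate timesteps by a feature-importance proxy and selecting the best one by argmax. The abstract does not specify A-SelecT's scoring function or how it folds selection into a single forward pass, so everything below is an assumption: the variance-based proxy, the `extract_features`/`select_timestep` names, and the toy feature generator are all hypothetical stand-ins, and the loop over candidates is only a didactic simplification of the paper's single-pass selection.

```python
import numpy as np

def feature_importance(feats: np.ndarray) -> float:
    # Hypothetical proxy (not from the paper): mean per-dimension variance
    # of token features; higher variance is taken to indicate more
    # discriminative structure surviving the noising process.
    return float(feats.var(axis=0).mean())

def select_timestep(extract_features, candidate_timesteps):
    """Pick the timestep whose features score highest under the proxy.

    `extract_features(t)` stands in for pulling intermediate DiT features
    at noise level t; here it is any callable returning an
    (n_tokens, dim) array.
    """
    scores = {t: feature_importance(extract_features(t)) for t in candidate_timesteps}
    best_t = max(scores, key=scores.get)
    return best_t, scores

# Toy stand-in for DiT features: a fixed "signal" that is progressively
# replaced by noise as the timestep grows, mimicking forward diffusion.
rng = np.random.default_rng(0)
signal = rng.standard_normal((64, 32)) * 3.0

def toy_extract(t):
    alpha = 1.0 - t / 1000.0  # crude linear noise schedule for illustration
    return alpha * signal + (1.0 - alpha) * rng.standard_normal((64, 32))

best_t, scores = select_timestep(toy_extract, [10, 100, 500, 900])
```

In this toy setup, lightly noised features retain the most signal variance, so the selector favors small timesteps; the real method's contribution is obtaining such a ranking without the explicit per-timestep sweep shown here.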