🤖 AI Summary
This work addresses the challenge of efficiently integrating external pretrained representations into diffusion models to improve both generation quality and training efficiency. Methodologically, it proposes a flexible representation-guided framework that (1) constructs cross-modal sample pairs to jointly model pretrained representations and the diffusion process, and (2) introduces a curriculum-based training strategy, combining denoising decomposition, multimodal alignment, and representation alignment, to jointly optimize representation learning and generative capability. Evaluated on ImageNet (256×256), protein sequence, and molecular generation tasks, the method achieves substantial improvements: training is 23.3× faster than SiT-XL and 4× faster than REPA, while yielding superior generation quality. The core contribution is a systematic integration of pretrained representation knowledge into diffusion backbones via a scalable, task-adaptive embedding mechanism that balances computational efficiency and generalization across diverse domains.
📝 Abstract
Diffusion models can be improved with additional guidance towards more effective representations of the input. Indeed, prior empirical work has already shown that aligning internal representations of the diffusion model with those of pre-trained models improves generation quality. In this paper, we present a systematic framework for incorporating representation guidance into diffusion models. We provide alternative decompositions of denoising models along with their associated training criteria, where the decompositions determine when and how the auxiliary representations are incorporated. Guided by our theoretical insights, we introduce two new strategies for enhancing representation alignment in diffusion models. First, we pair examples with target representations either derived from the examples themselves or arising from different synthetic modalities, and subsequently learn a joint model over the multimodal pairs. Second, we design an optimal training curriculum that balances representation learning and data generation. Our experiments across image, protein sequence, and molecule generation tasks demonstrate superior performance as well as accelerated training. In particular, on the class-conditional ImageNet $256 \times 256$ benchmark, our guidance results in $23.3$ times faster training than the original SiT-XL as well as a four-fold speedup over the state-of-the-art method REPA. The code is available at https://github.com/ChenyuWang-Monica/REED.
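To make the idea of representation alignment concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of a REPA-style auxiliary loss: intermediate diffusion-model features are linearly projected into the space of frozen pretrained representations, and the negative mean cosine similarity between the two is added to the denoising objective. All dimensions, names, and the random projection below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: diffusion hidden width, pretrained-encoder width, batch.
d_model, d_rep, batch = 64, 32, 8

# Frozen target representations y (e.g. from a pretrained encoder) and
# intermediate diffusion features h for the same batch of (noisy) inputs.
y = rng.standard_normal((batch, d_rep))
h = rng.standard_normal((batch, d_model))

# Trainable linear projection from feature space to representation space
# (random init here, purely for illustration).
W = rng.standard_normal((d_model, d_rep)) / np.sqrt(d_model)

def alignment_loss(h, y, W):
    """Negative mean cosine similarity between projected features and targets."""
    z = h @ W
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    t = y / np.linalg.norm(y, axis=1, keepdims=True)
    return -float(np.mean(np.sum(z * t, axis=1)))

loss = alignment_loss(h, y, W)
# In training, this term would be weighted and added to the denoising loss,
# e.g.: total = denoising_loss + lam * loss
```

The cosine form makes the auxiliary term scale-invariant, so it shapes the direction of the internal features without constraining their magnitude, which the denoising loss is free to determine.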