🤖 AI Summary
This study investigates the key factors governing generative performance in Representation Alignment (REPA) for diffusion models and finds that the spatial structure of target representations (e.g., pairwise cosine similarity between patch tokens) is more decisive than global semantic metrics such as classification accuracy. Building on this finding, we propose iREPA: a lightweight, efficient enhancement that adds only a convolutional projection and a spatial normalization layer (<4 lines of code) to explicitly model and strengthen spatial relational transfer. We systematically evaluate iREPA across 27 vision encoders, multiple model scales, and mainstream REPA variants (including REPA-E, MeanFlow, and JiT), demonstrating consistently faster training convergence and substantial improvements in generation quality. This work provides the first empirical evidence that spatial structural fidelity is central to REPA's effectiveness. Moreover, iREPA offers a concise, general-purpose, plug-and-play paradigm for structure-aware representation alignment, requiring no architectural modifications or additional supervision.
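The "spatial structure" metric mentioned above, pairwise cosine similarity between patch tokens, can be sketched in a few lines of numpy. This is an illustrative reading of the metric, not the paper's released code; `pairwise_patch_cosine` is a hypothetical helper name.

```python
import numpy as np

def pairwise_patch_cosine(tokens):
    """Pairwise cosine similarity between patch tokens.

    tokens: (N, D) array of N patch embeddings from a vision encoder.
    Returns an (N, N) similarity matrix, i.e. the 'spatial structure'
    of the representation as described in the summary.
    """
    norms = np.linalg.norm(tokens, axis=1, keepdims=True)
    unit = tokens / np.clip(norms, 1e-8, None)  # unit-normalize each token
    return unit @ unit.T                        # cosine similarity matrix
```

Comparing such matrices across encoders (rather than their ImageNet-1K accuracy) is what the study's spatial-structure analysis amounts to.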
📝 Abstract
Representation alignment (REPA) guides generative training by distilling representations from a strong, pretrained vision encoder into intermediate diffusion features. We investigate a fundamental question: what aspect of the target representation matters for generation, its *global semantic* information (e.g., as measured by ImageNet-1K accuracy) or its *spatial structure* (i.e., the pairwise cosine similarity between patch tokens)? Prevailing wisdom holds that a representation with stronger global semantic performance makes a better alignment target. To study this, we first perform a large-scale empirical analysis across 27 different vision encoders and multiple model scales. The results are surprising: spatial structure, rather than global performance, drives the generation performance of a target representation. To probe this further, we introduce two straightforward modifications that specifically accentuate the transfer of *spatial* information: we replace the standard MLP projection layer in REPA with a simple convolution layer, and we introduce a spatial normalization layer for the external representation. Our simple method (implemented in <4 lines of code), termed iREPA, consistently improves the convergence speed of REPA across a diverse set of vision encoders, model sizes, and training variants (REPA, REPA-E, MeanFlow, JiT, etc.). Our work motivates revisiting the fundamental working mechanism of representation alignment and how it can be leveraged for improved training of generative models. The code and project page are available at https://end2end-diffusion.github.io/irepa
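The two modifications can be sketched as follows. This is a minimal numpy illustration under stated assumptions: the abstract does not specify the kernel size or the exact normalization, so the 3x3 convolution and the "center then L2-normalize each token" normalization below are plausible instantiations, not the paper's definitive implementation (in a framework like PyTorch the change would amount to swapping an `nn.Linear` for an `nn.Conv2d` plus one normalization line).

```python
import numpy as np

def conv3x3_projection(feat, weight, bias):
    """Project student features with a 3x3 convolution over the patch
    grid instead of a token-wise MLP (hypothetical iREPA projection).

    feat:   (H, W, Cin) feature map laid out on the patch grid
    weight: (3, 3, Cin, Cout) convolution kernel, bias: (Cout,)
    """
    H, W, _ = feat.shape
    Cout = weight.shape[-1]
    padded = np.pad(feat, ((1, 1), (1, 1), (0, 0)))  # zero-pad H and W
    out = np.zeros((H, W, Cout))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :]       # (3, 3, Cin) window
            out[i, j] = np.tensordot(patch, weight, axes=3) + bias
    return out

def spatial_normalize(tokens):
    """Normalize the external (teacher) tokens so alignment is driven by
    their spatial relational structure rather than per-token magnitude:
    subtract the mean token, then L2-normalize each token (one plausible
    reading of the 'spatial normalization layer').

    tokens: (N, D) array of teacher patch embeddings.
    """
    centered = tokens - tokens.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / np.clip(norms, 1e-8, None)
```

Unlike a token-wise MLP, the convolution mixes information across neighboring patches, which is one way to make the alignment loss sensitive to spatial relations rather than individual token content.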