Training-Free Representation Guidance for Diffusion Models with a Representation Alignment Projector

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of semantic drift during early denoising stages in diffusion models, which often leads to semantically inconsistent generations. To mitigate this, the authors propose a training-free, inference-time guidance mechanism that dynamically injects unsupervised visual features as semantic anchors via a representation alignment projector at intermediate sampling steps. The method requires no architectural modifications and seamlessly integrates with classifier-free guidance, making it compatible with diffusion Transformers such as SiT and REPA. Evaluated on class-conditional ImageNet generation, the approach significantly improves both semantic consistency and visual fidelity: REPA-XL/2 achieves a notable reduction in FID from 5.9 to 3.3, outperforming existing representation-based guidance techniques.

📝 Abstract
Recent progress in generative modeling has enabled high-quality visual synthesis with diffusion-based frameworks, supporting controllable sampling and large-scale training. Inference-time guidance methods such as classifier-free and representative guidance enhance semantic alignment by modifying sampling dynamics; however, they do not fully exploit unsupervised feature representations. Although such visual representations contain rich semantic structure, their integration during generation is constrained by the absence of ground-truth reference images at inference. This work reveals semantic drift in the early denoising stages of diffusion transformers, where stochasticity results in inconsistent alignment even under identical conditioning. To mitigate this issue, we introduce a guidance scheme using a representation alignment projector that injects representations predicted by the projector into intermediate sampling steps, providing an effective semantic anchor without modifying the model architecture. Experiments on SiT and REPA models show notable improvements in class-conditional ImageNet synthesis, achieving substantially lower FID scores; for example, REPA-XL/2 improves from 5.9 to 3.3, and the proposed method outperforms representative guidance when applied to SiT models. The approach further yields complementary gains when combined with classifier-free guidance, demonstrating enhanced semantic coherence and visual fidelity. These results establish representation-informed diffusion sampling as a practical strategy for reinforcing semantic preservation and image consistency.
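To make the core idea concrete, here is a minimal toy sketch of what a representation-alignment guidance update could look like at a single sampling step. Everything below is an illustrative assumption, not the paper's implementation: the linear "projector" `W`, the function names (`project`, `rep_guidance`, `guided_update`), the dimensions, and the step sizes are all invented for the sketch. The paper's projector is a trained module operating inside a diffusion transformer; here it is a fixed linear map so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

D, R = 16, 8  # toy latent dim and representation dim (assumed)
# Toy linear stand-in for the representation alignment projector.
W = rng.normal(size=(R, D)) / np.sqrt(D)

def project(x):
    """Predict a semantic representation from the noisy latent."""
    return W @ x

def rep_guidance(x, anchor, scale=0.5):
    """Gradient of 0.5 * ||project(x) - anchor||^2 w.r.t. x.
    This direction nudges the latent so its predicted representation
    moves toward the semantic anchor."""
    return scale * (W.T @ (project(x) - anchor))

def guided_update(x, eps, anchor, w_rep=0.5, step=0.1):
    """One sampling update: the base denoiser prediction `eps`
    (e.g. the classifier-free-guided noise estimate) plus a
    representation-alignment correction, both applied to the latent."""
    return x - step * eps - w_rep * rep_guidance(x, anchor)

# Demo: the correction pulls the latent's projected representation
# toward the anchor even when the denoiser contributes nothing.
x = rng.normal(size=D)
anchor = project(rng.normal(size=D))
eps = np.zeros(D)  # pretend the denoiser predicts zero noise here
before = np.linalg.norm(project(x) - anchor)
x_new = guided_update(x, eps, anchor)
after = np.linalg.norm(project(x_new) - anchor)
```

In the actual method this correction would be injected only at intermediate sampling steps, and `eps` would already include the classifier-free guidance combination of conditional and unconditional predictions; the two terms compose additively, which is why the paper reports complementary gains.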
Problem

Research questions and friction points this paper is trying to address.

semantic drift
diffusion models
representation guidance
inference-time guidance
semantic alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

representation guidance
diffusion models
training-free
semantic alignment
representation alignment projector