V-Co: A Closer Look at Visual Representation Alignment via Co-Denoising

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pixel-space diffusion models receive relatively weak semantic supervision and therefore struggle to capture high-level visual structure. This work systematically investigates visual co-denoising within a unified JiT-based framework and distills four key design principles: a fully dual-stream co-denoising architecture, a structurally defined unconditional prediction for classifier-free guidance, a perceptual-drifting hybrid loss, and RMS-based feature rescaling. The resulting recipe, V-Co, substantially improves both generation quality and training efficiency, significantly outperforming current pixel-space diffusion models on ImageNet-256 while requiring fewer training epochs.
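
Of the four principles, RMS-based feature rescaling is the most self-contained. The sketch below shows one plausible reading of it, in which the semantic feature stream is rescaled per sample to match the reference (pixel) stream's root-mean-square magnitude before cross-stream interaction. The function name, signature, and reduction axes are assumptions for illustration, not the paper's API.

```python
import torch

def rms_rescale(feat: torch.Tensor, ref: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Rescale `feat` so its per-sample RMS matches that of `ref`.

    A hypothetical reading of "RMS-based feature rescaling": bring the
    semantic-feature stream to the same magnitude as the pixel stream so
    that cross-stream interaction is well calibrated.
    """
    # One RMS scalar per sample, reduced over all non-batch dimensions.
    feat_rms = feat.flatten(1).pow(2).mean(dim=1).sqrt()
    ref_rms = ref.flatten(1).pow(2).mean(dim=1).sqrt()
    # Broadcast the per-sample ratio back over feat's trailing dimensions;
    # eps guards against near-zero norms.
    scale = (ref_rms / (feat_rms + eps)).view(-1, *([1] * (feat.ndim - 1)))
    return feat * scale
```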

📝 Abstract
Pixel-space diffusion has recently re-emerged as a strong alternative to latent diffusion, enabling high-quality generation without pretrained autoencoders. However, standard pixel-space diffusion models receive relatively weak semantic supervision and are not explicitly designed to capture high-level visual structure. Recent representation-alignment methods (e.g., REPA) suggest that pretrained visual features can substantially improve diffusion training, and visual co-denoising has emerged as a promising direction for incorporating such features into the generative process. Yet existing co-denoising approaches often entangle multiple design choices, making it unclear which are truly essential. We therefore present V-Co, a systematic study of visual co-denoising in a unified JiT-based framework. This controlled setting lets us isolate what makes visual co-denoising effective, revealing four key ingredients. First, preserving feature-specific computation while enabling flexible cross-stream interaction motivates a fully dual-stream architecture. Second, effective classifier-free guidance (CFG) requires a structurally defined unconditional prediction. Third, stronger semantic supervision is best provided by a perceptual-drifting hybrid loss. Fourth, stable co-denoising further requires proper cross-stream calibration, which we realize through RMS-based feature rescaling. Together, these findings yield a simple recipe for visual co-denoising. Experiments on ImageNet-256 show that, at comparable model sizes, V-Co outperforms the underlying pixel-space diffusion baseline and strong prior pixel-diffusion methods while using fewer training epochs, offering practical guidance for future representation-aligned generative models.
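
To make the co-denoising idea concrete, here is a minimal, hypothetical training step for a dual-stream setup in a rectified-flow setting: pixels and frozen-encoder features are noised along linear paths and denoised jointly, with the feature branch supplying the semantic supervision the abstract describes. The `model` and `encoder` interfaces, the shared timestep, and the loss weight `lam` are all illustrative assumptions; the paper's perceptual-drifting hybrid loss would replace the plain MSE used on the feature branch.

```python
import torch
import torch.nn.functional as F

def co_denoising_step(model, encoder, x0, labels, lam: float = 0.5):
    """One illustrative dual-stream co-denoising training step.

    `encoder` is a frozen pretrained visual backbone providing clean
    semantic targets; `model` jointly predicts a velocity for the pixel
    stream and the feature stream. All interfaces are assumptions.
    """
    with torch.no_grad():
        f0 = encoder(x0)  # clean semantic features, no gradient

    b = x0.shape[0]
    t = torch.rand(b, device=x0.device)  # shared timestep for both streams
    tx = t.view(b, *([1] * (x0.ndim - 1)))
    tf = t.view(b, *([1] * (f0.ndim - 1)))

    # Independent Gaussian noise per stream, linear (rectified-flow) paths.
    ex, ef = torch.randn_like(x0), torch.randn_like(f0)
    x_t = (1 - tx) * x0 + tx * ex
    f_t = (1 - tf) * f0 + tf * ef

    # Joint prediction: one velocity per stream (data -> noise direction).
    v_x, v_f = model(x_t, f_t, t, labels)
    loss_pixel = F.mse_loss(v_x, ex - x0)
    loss_feat = F.mse_loss(v_f, ef - f0)  # stand-in for the hybrid loss
    return loss_pixel + lam * loss_feat
```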
Problem

Research questions and friction points this paper is trying to address.

visual co-denoising
representation alignment
pixel-space diffusion
semantic supervision
generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual co-denoising
dual-stream architecture
classifier-free guidance
perceptual-drifting loss
feature rescaling
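
The classifier-free guidance item above pairs with the structured unconditional prediction at sampling time. A minimal sketch of how that could look follows; here the unconditional branch is defined structurally by dropping the class label, which is one common CFG construction and may differ from the paper's exact choice. The `model` signature is illustrative.

```python
import torch

@torch.no_grad()
def guided_velocity(model, x_t, f_t, t, labels, scale: float = 1.5):
    """Classifier-free guidance for the pixel stream at sampling time.

    The unconditional branch is obtained structurally by passing no label;
    the paper's "structurally defined unconditional prediction" may be
    constructed differently. All interfaces are assumptions.
    """
    # Conditional and unconditional predictions share the same inputs.
    v_cond, _ = model(x_t, f_t, t, labels)
    v_uncond, _ = model(x_t, f_t, t, None)  # None = structural "no condition"
    # Standard CFG extrapolation away from the unconditional prediction.
    return v_uncond + scale * (v_cond - v_uncond)
```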