RecTok: Reconstruction Distillation along Rectified Flow

📅 2025-12-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing high-dimensional visual tokenizers for diffusion models face a fundamental trade-off between reconstruction fidelity and generative quality. To address this, the paper proposes RecTok, a reconstruction distillation paradigm grounded in rectified flow, built on two key innovations: flow semantic distillation and reconstruction-alignment distillation. Unlike prior approaches that enrich the latent space directly, the method injects semantic knowledge from vision foundation models (VFMs) into the forward flow trajectory of flow matching—the space in which diffusion transformers are actually trained—thereby sidestepping the usual limitations of high-dimensional tokenizers. Semantics are further strengthened with a masked feature reconstruction loss. Experiments report state-of-the-art gFID-50K results both with and without classifier-free guidance, alongside gains in high-dimensional reconstruction accuracy, generative quality, and discriminative performance. Notably, the method jointly achieves high-fidelity reconstruction and strong semantic representation.


📝 Abstract
Visual tokenizers play a crucial role in diffusion models. The dimensionality of the latent space governs both reconstruction fidelity and the semantic expressiveness of the latent features. However, a fundamental trade-off exists between dimensionality and generation quality, constraining existing methods to low-dimensional latent spaces. Although recent works have leveraged vision foundation models to enrich the semantics of visual tokenizers and accelerate convergence, high-dimensional tokenizers still underperform their low-dimensional counterparts. In this work, we propose RecTok, which overcomes the limitations of high-dimensional visual tokenizers through two key innovations: flow semantic distillation and reconstruction-alignment distillation. Our key insight is to make the forward flow in flow matching—which serves as the training space of diffusion transformers—semantically rich, rather than focusing on the latent space as in previous works. Specifically, our method distills the semantic information in VFMs into the forward flow trajectories of flow matching, and we further enhance the semantics by introducing a masked feature reconstruction loss. RecTok achieves superior image reconstruction, generation quality, and discriminative performance. It attains state-of-the-art results on gFID-50K both with and without classifier-free guidance, while maintaining a semantically rich latent space. Furthermore, we observe consistent improvements as the latent dimensionality increases. Code and model are available at https://shi-qingyu.github.io/rectok.github.io.
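The core idea in the abstract—making the forward flow trajectory itself semantically rich—can be illustrated with a minimal sketch. This is not the authors' code: the function name, the projection head, and the interpolation convention (noise at t=0, data at t=1, one common rectified-flow convention) are all assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def flow_semantic_distillation_loss(latent, noise, vfm_features, projector, t):
    """Hedged sketch of flow semantic distillation.

    In rectified flow, the forward trajectory is a straight-line
    interpolation between noise and data latents:
        x_t = (1 - t) * noise + t * latent
    The sketch projects a point on that trajectory and aligns it with
    frozen vision-foundation-model (VFM) features, so that the space the
    diffusion transformer trains on carries semantic structure.
    """
    t = t.view(-1, 1, 1)                      # broadcast time over tokens/dims
    x_t = (1.0 - t) * noise + t * latent      # point on the forward flow
    pred = projector(x_t)                     # hypothetical projection head
    # cosine-style alignment with the frozen VFM features
    return 1.0 - F.cosine_similarity(pred, vfm_features, dim=-1).mean()
```

In practice such a term would be added to the tokenizer's reconstruction objective with a weighting coefficient; the cosine alignment here is merely one plausible choice of distillation loss.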
Problem

Research questions and friction points this paper is trying to address.

Overcomes trade-off between latent space dimensionality and generation quality
Enhances semantics in high-dimensional visual tokenizers for diffusion models
Improves image reconstruction and generation via flow semantic distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills VFM semantics into flow matching trajectories
Introduces a masked feature reconstruction loss to further enrich semantics
Uses forward flow as training space for diffusion transformers
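The masked feature reconstruction loss mentioned above can be sketched in an MAE-style form. This is an assumption-laden illustration, not the paper's implementation: the decoder, the masking scheme, and the regression target (frozen VFM features at masked positions) are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def masked_feature_reconstruction_loss(tokens, vfm_features, decoder, mask_ratio=0.5):
    """Hedged sketch of a masked feature reconstruction loss.

    Randomly mask a fraction of tokens, run a (hypothetical) decoder over
    the masked sequence, and regress the frozen VFM features at the
    masked positions only.
    """
    B, N, D = tokens.shape
    mask = torch.rand(B, N, device=tokens.device) < mask_ratio   # True = masked
    masked = tokens.masked_fill(mask.unsqueeze(-1), 0.0)         # zero out masked tokens
    pred = decoder(masked)
    # mean-squared error restricted to the masked positions
    return F.mse_loss(pred[mask], vfm_features[mask])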
🔎 Similar Papers
No similar papers found.