🤖 AI Summary
Existing visual tokenization methods rely on autoencoder (AE)-based single-step reconstruction and struggle to balance compression efficiency with generation quality on high-dimensional visual data. This paper introduces a “denoising-as-decoding” paradigm that replaces the conventional AE decoder with a diffusion model: the decoder iteratively denoises samples conditioned on the latent representations extracted by the encoder, progressively reconstructing pixels from the latent space. By tightly integrating variational autoencoding with diffusion modeling, the approach moves beyond single-step reconstruction and jointly optimizes compression fidelity and generative capability. Experiments show clear gains over autoencoding baselines: higher reconstruction fidelity (lower rFID), a 22% improvement in downstream generation quality (FID), and a 2.3× inference speedup.
📝 Abstract
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space. For high-dimensional visual data, it reduces redundancy and emphasizes key features for high-quality generation. Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input. In this work, we offer a new perspective by proposing denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder. We evaluate our approach by assessing both reconstruction (rFID) and generation quality (FID), comparing it to state-of-the-art autoencoding approaches. By adopting iterative reconstruction through diffusion, our autoencoder, namely $\epsilon$-VAE, achieves high reconstruction quality, which in turn enhances downstream generation quality by 22% and provides a 2.3$\times$ inference speedup. We hope this work offers new insights into integrating iterative generation and autoencoding for improved compression and generation.
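The core idea above can be sketched in a toy form: an encoder compresses an image into a latent, and instead of a single decoder pass, reconstruction proceeds by iteratively refining noise under guidance from that latent. The sketch below is illustrative only, not the paper's actual model; it stands in for the learned, latent-conditioned denoising network with a crude pseudo-inverse estimate, and all names (`encode`, `denoise_step`, `decode`) are assumptions.

```python
# Toy sketch of "denoising as decoding": iterative refinement of noise,
# guided by an encoder latent. A real system would use a learned diffusion
# model conditioned on z; here the denoiser is a hand-written stand-in.
import numpy as np

rng = np.random.default_rng(0)

def encode(x, proj):
    """Compress an image vector x into a lower-dimensional latent z."""
    return proj @ x

def denoise_step(x_t, z, t, T, proj):
    """One refinement step: nudge the noisy sample toward an image
    consistent with the latent z. The pseudo-inverse plays the role of
    the learned, z-conditioned denoising network (an assumption)."""
    x_hat = np.linalg.pinv(proj) @ z      # crude estimate of the clean image
    alpha = 1.0 / (T - t)                 # step size grows as t approaches T
    return x_t + alpha * (x_hat - x_t)

def decode(z, dim, proj, T=50):
    """Iteratively refine pure Gaussian noise into an image, guided by z,
    instead of reconstructing in a single decoder pass."""
    x_t = rng.standard_normal(dim)        # start from noise
    for t in range(T):
        x_t = denoise_step(x_t, z, t, T, proj)
    return x_t

dim, latent_dim = 16, 4
proj = rng.standard_normal((latent_dim, dim))   # toy linear "encoder"
x = rng.standard_normal(dim)
z = encode(x, proj)
x_rec = decode(z, dim, proj)
# The refined sample is consistent with the latent: re-encoding recovers z.
print(np.allclose(encode(x_rec, proj), z, atol=1e-6))
```

Note the information bottleneck this makes visible: the reconstruction agrees with `x` only through the latent `z`, which is why the quality of the iterative decoder, rather than the single-step decoder, bounds reconstruction fidelity.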