Latent Denoising Makes Good Visual Tokenizers

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual tokenizers are critical for generative modeling, yet their design lacks explicit alignment with the denoising reconstruction objective. This paper introduces a latent denoising-driven tokenizer design paradigm, unifying for the first time tokenizer training with the core generative task of reconstructing clean latent representations from noisy or masked inputs. To this end, we propose the Latent Denoising Tokenizer (l-DeTok), an autoencoder-based architecture trained with interpolation-based Gaussian noise injection and random masking under a reconstruction loss, significantly enhancing token embeddability and robustness. Evaluated on ImageNet at 256×256 resolution, l-DeTok consistently outperforms standard tokenizers across six state-of-the-art generative models, yielding substantial improvements in generation quality. Our approach establishes a principled framework for tokenizer design grounded in the fundamental denoising objective of latent diffusion and masked autoencoding models.

📝 Abstract
Despite their fundamental role, it remains unclear what properties could make visual tokenizers more effective for generative modeling. We observe that modern generative models share a conceptually similar training objective -- reconstructing clean signals from corrupted inputs such as Gaussian noise or masking -- a process we term denoising. Motivated by this insight, we propose aligning tokenizer embeddings directly with the downstream denoising objective, encouraging latent embeddings to be more easily reconstructed even when heavily corrupted. To achieve this, we introduce the Latent Denoising Tokenizer (l-DeTok), a simple yet effective tokenizer trained to reconstruct clean images from latent embeddings corrupted by interpolative noise and random masking. Extensive experiments on ImageNet 256x256 demonstrate that our tokenizer consistently outperforms standard tokenizers across six representative generative models. Our findings highlight denoising as a fundamental design principle for tokenizer development, and we hope it could motivate new perspectives for future tokenizer design.
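The corruption scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `corrupt_latents`, the exact interpolation weighting, and the use of zeroing as a mask-token stand-in are assumptions for clarity.

```python
import numpy as np

def corrupt_latents(z, gamma, mask_ratio, rng):
    """Corrupt latent tokens with interpolative noise and random masking.

    z          : (num_tokens, dim) latent embeddings from the tokenizer encoder.
    gamma      : interpolation weight in [0, 1] toward pure Gaussian noise
                 (assumed linear blend; the paper's exact schedule may differ).
    mask_ratio : fraction of tokens replaced by a placeholder (zeros here,
                 standing in for a learnable mask token).
    """
    noise = rng.standard_normal(z.shape)
    # Interpolative noise: blend clean latents toward Gaussian noise.
    z_tilde = (1.0 - gamma) * z + gamma * noise
    # Random masking: drop a subset of tokens entirely.
    num_tokens = z.shape[0]
    num_masked = int(mask_ratio * num_tokens)
    masked_idx = rng.choice(num_tokens, size=num_masked, replace=False)
    z_tilde[masked_idx] = 0.0
    return z_tilde

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 8))          # 16 latent tokens, 8-dim each
z_tilde = corrupt_latents(z, gamma=0.5, mask_ratio=0.25, rng=rng)
```

The tokenizer's decoder would then be trained to reconstruct the clean image from `z_tilde`, so that the learned latents remain decodable even under heavy corruption.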
Problem

Research questions and friction points this paper is trying to address.

Align tokenizer embeddings with denoising objective
Improve reconstruction of corrupted latent embeddings
Enhance tokenizer performance for generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Align tokenizer embeddings with denoising objective
Introduce Latent Denoising Tokenizer (l-DeTok)
Train tokenizer to reconstruct images from corrupted embeddings