Image Tokenizer Needs Post-Training

📅 2025-09-15
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing image tokenizers optimize only reconstruction objectives, leading to severe misalignment between the reconstructed and generated distributions. Method: the paper proposes a two-stage tokenizer training framework: (1) main training introduces latent-space perturbations to improve the robustness of latent representations; (2) post-training jointly optimizes the decoder to explicitly align the generated distribution with the reconstructed one. Contribution/Results: the authors introduce pFID, a metric quantifying the shift between generated and reconstructed distributions, and integrate discrete token modeling with generative-model optimization, supporting both autoregressive and diffusion architectures. Evaluated with a ~400M-parameter generator, the approach improves gFID from 1.60 to 1.36, yielding significant gains in generation quality and training convergence speed.
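The latent perturbation idea can be sketched as randomly replacing a fraction of the discrete token indices with random codebook entries during tokenizer training, so the decoder learns to tolerate the "unexpected tokens" a generator will later produce. This is a minimal illustration, not the paper's implementation; the function name and the perturbation rate are assumptions:

```python
import numpy as np

def perturb_tokens(tokens, codebook_size, rate=0.1, rng=None):
    """Replace a random fraction `rate` of discrete token indices with
    random codebook entries, simulating sampling noise from a generator."""
    rng = rng or np.random.default_rng(0)
    tokens = np.asarray(tokens)
    mask = rng.random(tokens.shape) < rate          # positions to corrupt
    noise = rng.integers(0, codebook_size, size=tokens.shape)
    return np.where(mask, noise, tokens)

tokens = np.arange(16).reshape(4, 4) % 8   # toy 4x4 token map, codebook of 8
noisy = perturb_tokens(tokens, codebook_size=8, rate=0.25)
```

Training the decoder to reconstruct the clean image from `noisy` rather than `tokens` is what makes the scheme plug-and-play: it changes only the tokenizer's training inputs, not its architecture.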

๐Ÿ“ Abstract
Recent image generative models typically capture the image distribution in a pre-constructed latent space, relying on a frozen image tokenizer. However, there exists a significant discrepancy between the reconstruction and generation distributions, since current tokenizers only prioritize the reconstruction task, which happens before generative training, without considering the generation errors that arise during sampling. In this paper, we comprehensively analyze the reason for this discrepancy in a discrete latent space and, based on this analysis, propose a novel tokenizer training scheme comprising both main-training and post-training, which improve latent space construction and decoding respectively. During the main training, a latent perturbation strategy is proposed to simulate sampling noises, i.e., the unexpected tokens generated during generative inference. Specifically, we propose a plug-and-play tokenizer training scheme that significantly enhances the robustness of the tokenizer, thus boosting generation quality and convergence speed, and a novel tokenizer evaluation metric, i.e., pFID, which successfully correlates tokenizer performance with generation quality. During post-training, we further optimize the tokenizer decoder with respect to a well-trained generative model to mitigate the distribution difference between generated and reconstructed tokens. With a ~400M generator, a discrete tokenizer trained with our proposed main-training achieves a notable 1.60 gFID and further obtains 1.36 gFID with the additional post-training. Further experiments broadly validate the effectiveness of our post-training strategy on off-the-shelf discrete and continuous tokenizers, coupled with autoregressive and diffusion-based generators.
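pFID, as described, scores a tokenizer by comparing the distribution of images decoded from perturbed tokens against a reference distribution. The core of any FID-style metric is the Fréchet distance between two Gaussians fitted to image features; a sketch with synthetic feature statistics follows (the feature extractor and the paper's exact protocol are not reproduced here, and `scipy` is assumed for the matrix square root):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2),
    the quantity underlying FID-style metrics such as pFID."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

rng = np.random.default_rng(0)
ref = rng.normal(size=(1000, 8))            # stand-in: features of reference images
gen = rng.normal(loc=0.5, size=(1000, 8))   # stand-in: features decoded from perturbed tokens
d = frechet_distance(ref.mean(0), np.cov(ref, rowvar=False),
                     gen.mean(0), np.cov(gen, rowvar=False))
```

A tokenizer whose decoder is robust to token perturbations yields feature statistics closer to the reference, hence a lower pFID; this is what lets the metric correlate with downstream gFID.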
Problem

Research questions and friction points this paper is trying to address.

Addresses discrepancy between reconstruction and generation distributions in image tokenizers
Proposes main-training with latent perturbation to simulate sampling noises
Introduces post-training to optimize decoder for reduced distribution difference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-training tokenizer optimization for generation
Latent perturbation simulates sampling noise
Plug-and-play training enhances tokenizer robustness