🤖 AI Summary
Existing image generation methods rely on fixed image tokenizers, yet standard evaluation metrics (such as rFID) fail to characterize tokenizer performance or correlate reliably with generation quality (gFID). This stems from a fundamental inconsistency between reconstruction fidelity and generation quality in discrete latent spaces.
Method: We identify sampling error in discrete tokenization as the root cause and propose a latent perturbation framework that explicitly models token sampling noise as structured perturbations in the latent space, enhancing robustness of frozen tokenizers. We further introduce a plug-and-play tokenizer co-optimization paradigm.
Contribution/Results: We propose pFID, the first metric demonstrating strong correlation (ρ > 0.95) between tokenizer performance and gFID. On a 400M-parameter autoregressive generator, our method achieves gFID scores of 1.60 (with classifier-free guidance) and 3.45 (without), substantially outperforming baselines. Results are rigorously validated across 11 distinct tokenizers and 2 AR architectures.
📝 Abstract
Recent image generation schemes typically capture the image distribution in a pre-constructed latent space that relies on a frozen image tokenizer. Though the performance of the tokenizer plays an essential role in successful generation, current evaluation metrics (e.g., rFID) fail to precisely assess the tokenizer or correlate its performance with generation quality (e.g., gFID). In this paper, we comprehensively analyze the reasons for the discrepancy between reconstruction and generation quality in a discrete latent space, and from this analysis we propose a novel plug-and-play tokenizer training scheme to facilitate latent space construction. Specifically, a latent perturbation approach is proposed to simulate sampling noise, i.e., the unexpected tokens sampled during the generative process. With the latent perturbation, we further propose (1) a novel tokenizer evaluation metric, i.e., pFID, which successfully correlates tokenizer performance with generation quality, and (2) a plug-and-play tokenizer training scheme, which significantly enhances the robustness of the tokenizer, thus boosting generation quality and convergence speed. Extensive benchmarking is conducted with 11 advanced discrete image tokenizers and 2 autoregressive generation models to validate our approach. The tokenizer trained with our proposed latent perturbation achieves a notable 1.60 gFID with classifier-free guidance (CFG) and 3.45 gFID without CFG with a ~400M generator. Code: https://github.com/lxa9867/ImageFolder.
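The core idea of simulating sampling noise can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `perturb_tokens`, the perturbation `rate`, and the uniform choice of replacement codes are hypothetical, whereas the paper models *structured* perturbations in the latent space. The sketch simply replaces a random fraction of discrete token indices with other codebook entries, mimicking the unexpected tokens a generator may sample.

```python
import numpy as np

def perturb_tokens(tokens: np.ndarray, codebook_size: int, rate: float,
                   rng: np.random.Generator) -> np.ndarray:
    """Replace a fraction `rate` of token indices with other codebook
    entries, simulating sampling errors from a generative model.

    The decoder would then be trained to reconstruct the image from the
    perturbed tokens, making the tokenizer robust to such errors.
    """
    perturbed = tokens.copy()
    # positions to perturb, chosen independently with probability `rate`
    mask = rng.random(tokens.shape) < rate
    # offsets in 1..codebook_size-1 guarantee the new index differs
    offsets = rng.integers(1, codebook_size, size=tokens.shape)
    perturbed[mask] = (tokens[mask] + offsets[mask]) % codebook_size
    return perturbed

rng = np.random.default_rng(0)
tokens = rng.integers(0, 1024, size=(16, 16))  # a 16x16 grid of token ids
noisy = perturb_tokens(tokens, codebook_size=1024, rate=0.1, rng=rng)
```

During tokenizer training, the decoder sees `noisy` instead of `tokens`; at evaluation time, the same perturb-then-decode loop underlies the pFID-style measurement of how gracefully reconstruction degrades under token sampling noise.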