🤖 AI Summary
Existing vector-quantized masked image modeling (MIM) approaches struggle to jointly optimize generation quality, representation capability, and computational efficiency within a shared latent space. This paper proposes a unified framework achieving synergistic optimization through three core innovations: (1) introducing token merging into VQ-based generative models, decoupling semantic aggregation from quantization; (2) adopting lookup-free quantization (LFQ) with globally aligned joint training, eliminating codebook dependency while enhancing semantic consistency; and (3) designing the MergeAR module, which integrates token merging with KV-cache compression to accelerate autoregressive generation. Evaluated on ImageNet, the method achieves state-of-the-art performance in both representation learning and image generation while significantly improving token efficiency, inference speed, and transferability to downstream tasks.
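To make the first innovation concrete, here is a minimal, hedged sketch of similarity-based token merging in the spirit of ToMe-style bipartite matching: tokens are split into two sets, the `r` most similar cross-set pairs are averaged, and the sequence shrinks from N to N - r tokens. This is an illustrative simplification, not MergeVQ's actual module; the function name `merge_tokens` and the pair-averaging rule are assumptions for exposition.

```python
import numpy as np

def merge_tokens(x, r):
    """Shrink N tokens to N - r by averaging the r most similar
    cross-set pairs (simplified ToMe-style bipartite matching sketch)."""
    a, b = x[0::2], x[1::2]                       # bipartite split of tokens
    an = a / np.linalg.norm(a, axis=1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = an @ bn.T                               # cosine similarity a-vs-b
    best = sim.argmax(axis=1)                     # best b-partner per a-token
    score = sim.max(axis=1)
    merged_a = np.argsort(-score)[:r]             # a-tokens merged away
    keep_a = np.setdiff1d(np.arange(a.shape[0]), merged_a)
    out_b = b.copy()
    for i in merged_a:                            # average each merged pair
        out_b[best[i]] = (out_b[best[i]] + a[i]) / 2
    return np.concatenate([a[keep_a], out_b], axis=0)

x = np.random.default_rng(0).normal(size=(8, 4))
y = merge_tokens(x, 2)                            # 8 tokens -> 6 tokens
```

In the actual paper the merged (kept) tokens carry the top-k semantics forward to quantization, while the merging pattern is what the decoder and MergeAR later exploit for recovery and cache compression.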
📝 Abstract
Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great success in both self-supervised pre-training and image generation. However, most existing methods struggle with the trade-off between generation quality, representation learning, and efficiency within a shared latent space. To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based generative models to bridge the gap between image generation and visual representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token merging module placed after the self-attention blocks in the encoder, feeding the merged tokens to Look-up Free Quantization (LFQ) and global alignment, and then recovers their fine-grained details through cross-attention in the decoder for reconstruction. For second-stage generation, we introduce MergeAR, which performs KV-cache compression for efficient raster-order prediction. Extensive experiments on ImageNet verify that MergeVQ, as an AR generative model, achieves competitive performance in both visual representation learning and image generation while maintaining favorable token efficiency and inference speed. The code and models will be available at https://apexgen-x.github.io/MergeVQ.
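The Look-up Free Quantization step mentioned above can be sketched in a few lines. In LFQ (as introduced in prior work such as MAGVIT-v2, which this paper builds on), each latent channel is quantized independently to a sign, so a D-dimensional latent implicitly indexes one of 2^D codes without any learned codebook lookup. The function below is a minimal illustration under that assumption, not the paper's exact implementation.

```python
import numpy as np

def lfq(z):
    """Lookup-free quantization sketch: quantize each latent channel
    to +/-1; the sign pattern read as a binary number is the implicit
    code index, so no codebook of embeddings is needed."""
    q = np.where(z >= 0, 1.0, -1.0)               # per-channel sign quantization
    bits = (q > 0).astype(np.int64)               # +1 -> bit 1, -1 -> bit 0
    idx = bits @ (2 ** np.arange(z.shape[-1]))    # binary code -> integer index
    return q, idx

z = np.array([[0.3, -1.2, 0.7],
              [-0.5, 0.1, -0.2]])
q, idx = lfq(z)                                   # idx: [5, 2]
```

Because quantization is element-wise, the effective vocabulary grows exponentially with the latent dimension while the quantizer itself stays parameter-free, which is what makes joint training with global alignment tractable at scale.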