🤖 AI Summary
Existing discrete visual generation methods are constrained by low-dimensional latent spaces and struggle to simultaneously preserve the semantic richness of high-dimensional pretrained representations while achieving strong generative capabilities. This work proposes CubiD, the first model to enable fine-grained masked prediction in a high-dimensional discrete space (768–1024 dimensions): any dimension at any spatial location can be independently masked and accurately reconstructed from partial observations, capturing strong dependencies both across spatial positions and across feature dimensions. Built on a discrete diffusion mechanism and a unified multimodal architecture, CubiD achieves efficient inference with a fixed number of generation steps and attains state-of-the-art performance among discrete generative models on ImageNet-256. The model further demonstrates strong scalability, from 0.9B to 3.7B parameters, and broad representation universality while retaining the discriminative power of the original visual features.
📝 Abstract
Visual generation with discrete tokens has gained significant attention because it enables a unified token-prediction paradigm shared with language models, promising seamless multimodal architectures. However, current discrete generation methods remain limited to low-dimensional latent tokens (typically 8-32 dims), sacrificing the semantic richness essential for understanding. While high-dimensional pretrained representations (768-1024 dims) could bridge this gap, generating them discretely poses fundamental challenges. In this paper, we present Cubic Discrete Diffusion (CubiD), the first discrete generation model for high-dimensional representations. CubiD performs fine-grained masking throughout the high-dimensional discrete representation -- any dimension at any spatial position can be masked and predicted from partial observations. This enables the model to learn rich correlations both within and across spatial positions, while the number of generation steps stays fixed at $T$ regardless of feature dimensionality, where $T \ll hwd$ for an $h \times w$ grid of $d$-dimensional tokens. On ImageNet-256, CubiD achieves state-of-the-art discrete generation with strong scaling behavior from 900M to 3.7B parameters. Crucially, we validate that the discretized tokens preserve the capabilities of the original representations, demonstrating that the same discrete tokens can effectively serve both understanding and generation tasks. We hope this work inspires future research toward unified multimodal architectures. Code is available at: https://github.com/YuqingWang1029/CubiD.