🤖 AI Summary
This work proposes COMiT, a framework that introduces a communication mechanism into visual tokenization to generate interpretable, object-centric discrete token sequences with explicit semantic structure. This addresses a limitation of existing image discretization methods, which primarily optimize for reconstruction and compression. Within a fixed token budget, COMiT iteratively attends to local image regions and recurrently updates the token sequence through a unified Transformer architecture. The model is trained end-to-end with a combination of a flow-matching reconstruction loss and a semantic alignment loss. The resulting token sequences exhibit clear object-level semantics and yield substantial improvements over prior approaches on compositional generalization and relational reasoning tasks.
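The iterative encoding described above can be sketched in a few lines. This is a purely illustrative toy, not the authors' code: the helper names (`select_crop`, `update_tokens`) and all constants are assumptions, and a simple arithmetic update stands in for the transformer's attention-based refinement of the token sequence.

```python
# Toy sketch of COMiT-style iterative encoding (illustrative assumptions throughout).

TOKEN_BUDGET = 8      # fixed length of the discrete message (assumed value)
CODEBOOK_SIZE = 16    # size of the discrete vocabulary (assumed value)
NUM_STEPS = 4         # number of encoding iterations (assumed value)

def select_crop(image, step):
    """Pick a localized region to observe at this step (here: a fixed grid walk)."""
    h, w = len(image), len(image[0])
    row = (step * 2) % h
    return [r[: w // 2] for r in image[row:row + 2]]

def update_tokens(tokens, crop):
    """Stand-in for the transformer update: refine every slot using a
    summary of the newly observed crop (the real model uses attention)."""
    signal = sum(sum(r) for r in crop)
    return [(t + signal + i) % CODEBOOK_SIZE for i, t in enumerate(tokens)]

def encode(image):
    tokens = [0] * TOKEN_BUDGET              # message length is fixed up front
    for step in range(NUM_STEPS):
        crop = select_crop(image, step)      # observe a local region
        tokens = update_tokens(tokens, crop) # refine/reorganize the whole sequence
    return tokens

image = [[(x + y) % 7 for x in range(8)] for y in range(8)]
message = encode(image)
print(message)
```

The key structural point the sketch preserves is that each step rewrites the entire token sequence, rather than appending to it, so earlier tokens can be reorganized as new regions are observed.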
📝 Abstract
Discrete image tokenizers have emerged as a key component of modern vision and multimodal systems, providing a sequential interface for transformer-based architectures. However, most existing approaches remain primarily optimized for reconstruction and compression, often yielding tokens that capture local texture rather than object-level semantic structure. Inspired by the incremental and compositional nature of human communication, we introduce COMmunication inspired Tokenization (COMiT), a framework for learning structured discrete visual token sequences. COMiT constructs a latent message within a fixed token budget by iteratively observing localized image crops and recurrently updating its discrete representation. At each step, the model integrates new visual information while refining and reorganizing the existing token sequence. After several encoding iterations, the final message conditions a flow-matching decoder that reconstructs the full image. Both encoding and decoding are implemented within a single transformer model and trained end-to-end using a combination of flow-matching reconstruction and semantic representation alignment losses. Our experiments demonstrate that while semantic alignment provides grounding, attentive sequential tokenization is critical for inducing interpretable, object-centric token structure and substantially improving compositional generalization and relational reasoning over prior methods.
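The decoder's training signal can be illustrated with a minimal conditional flow-matching objective. This is a generic sketch of the standard formulation (linear interpolation path, velocity target `x1 - x0`), not the paper's implementation; the toy linear `decoder`, the concatenation-based conditioning on the token message, and all dimensions are assumptions made for the example.

```python
# Minimal conditional flow-matching loss (standard rectified-flow-style target).
import random

random.seed(0)

def flow_matching_loss(decoder, tokens, x1, n_samples=64):
    """Regress predicted velocity onto the straight-path target.

    x_t = (1 - t) * x0 + t * x1 with x0 ~ N(0, I); target velocity is x1 - x0.
    `tokens` conditions the prediction (here simply concatenated to the input)."""
    d = len(x1)
    total = 0.0
    for _ in range(n_samples):
        t = random.random()
        x0 = [random.gauss(0, 1) for _ in range(d)]
        xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
        v_target = [b - a for a, b in zip(x0, x1)]
        v_pred = decoder(xt + tokens + [t])
        total += sum((p - q) ** 2 for p, q in zip(v_pred, v_target)) / d
    return total / n_samples

# Toy "decoder": a fixed random linear map from (sample + message + time) to velocity.
d_img, d_msg = 4, 3
W = [[random.gauss(0, 0.1) for _ in range(d_img + d_msg + 1)] for _ in range(d_img)]
decoder = lambda z: [sum(w * zi for w, zi in zip(row, z)) for row in W]

x1 = [random.gauss(0, 1) for _ in range(d_img)]   # target "image" to reconstruct
tokens = [0.5, -0.2, 0.1]                         # conditioning message
loss = flow_matching_loss(decoder, tokens, x1)
print(loss)
```

In the paper's setting the decoder is the same transformer used for encoding, and the conditioning signal is the final discrete message rather than a real-valued vector; the sketch only shows the shape of the loss.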