Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) rely on image tokenization strategies that are ill-suited for fine-grained, object-level understanding and generation. To address this, we propose the first object-centric visual tokenization framework for MLLMs, applying Slot Attention, previously used in unsupervised representation learning, to the discrete token encoding of natural images and enabling semantics-aware object-level segmentation. The slot tokenizer combines a Q-Former encoder, a diffusion-based decoder, and residual vector quantization, jointly preserving local details and high-level semantics while conforming natively to the LLM's next-token prediction paradigm. Evaluated on multiple vision-language tasks that demand localized, fine-grained comprehension and generation, the approach consistently outperforms mainstream visual tokenizers, demonstrating both the effectiveness and the feasibility of object-centric tokenization in MLLMs.
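The residual vector quantization mentioned above can be illustrated with a minimal numpy sketch: each stage quantizes the residual left by the previous stage, and the selected codebook entries sum to the reconstruction. The codebook sizes and shapes here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def residual_vq(x, codebooks):
    """Greedy residual vector quantization sketch (illustrative only).

    Each stage picks the codebook entry nearest to the current residual;
    summing the chosen entries across stages yields the reconstruction.
    """
    codes = []
    recon = np.zeros_like(x)
    residual = x.copy()
    for cb in codebooks:
        # Nearest codebook entry to the current residual.
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        codes.append(idx)
        recon = recon + cb[idx]
        residual = x - recon
    return codes, recon

rng = np.random.default_rng(0)
x = rng.normal(size=8)                                  # a single slot embedding (hypothetical size)
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]  # 3 stages of 16 entries each (assumed)
codes, recon = residual_vq(x, codebooks)
```

Each slot token thus becomes a short sequence of discrete code indices, which is what lets the LLM treat visual content under its usual next-token prediction objective.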

📝 Abstract
Recently, multimodal large language models (MLLMs) have emerged as a key approach in achieving artificial general intelligence. In particular, vision-language MLLMs have been developed to generate not only text but also visual outputs from multimodal inputs. This advancement requires efficient image tokens that LLMs can process effectively both in input and output. However, existing image tokenization methods for MLLMs typically capture only global abstract concepts or uniformly segmented image patches, restricting MLLMs' capability to effectively understand or generate detailed visual content, particularly at the object level. To address this limitation, we propose an object-centric visual tokenizer based on Slot Attention specifically for MLLMs. In particular, based on the Q-Former encoder, diffusion decoder, and residual vector quantization, our proposed discretized slot tokens can encode local visual details while maintaining high-level semantics, and also align with textual data to be integrated seamlessly within a unified next-token prediction framework of LLMs. The resulting Slot-MLLM demonstrates significant performance improvements over baselines with previous visual tokenizers across various vision-language tasks that entail local detailed comprehension and generation. Notably, this work is the first demonstration of the feasibility of object-centric slot attention performed with MLLMs and in-the-wild natural images.
Problem

Research questions and friction points this paper is trying to address.

Existing image tokenization lacks object-level detail understanding.
Current methods fail to align visual and textual data effectively.
Need for efficient object-centric visual tokens in MLLMs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-centric visual tokenizer using Slot Attention.
Combines a Q-Former encoder and a diffusion decoder.
Discretized slot tokens for detailed visual encoding.
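The Slot Attention mechanism behind the tokenizer can be sketched in a few lines of numpy. This is a simplified version of the original algorithm (Locatello et al., 2020): the learned projections, layer norms, and GRU update are omitted, so it shows only the core idea that slots compete for input features via attention normalized over the slot axis.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Simplified Slot Attention: iterative competitive grouping of
    input features (e.g. image patch embeddings) into slot vectors."""
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    slots = rng.normal(size=(num_slots, d))   # randomly initialized slots
    for _ in range(iters):
        logits = inputs @ slots.T / np.sqrt(d)   # (n, num_slots)
        # Softmax over slots: each input feature is contested by slots.
        attn = softmax(logits, axis=1)
        # Weighted mean of the inputs claimed by each slot.
        weights = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = weights.T @ inputs               # (num_slots, d)
    return slots, attn

rng = np.random.default_rng(1)
patches = rng.normal(size=(10, 4))   # 10 patch features of dim 4 (toy sizes)
slots, attn = slot_attention(patches, num_slots=3)
```

The slot-wise competition (softmax over slots rather than over inputs) is what pushes each slot toward a distinct object-like region; in Slot-MLLM these slot vectors are then discretized by the residual quantizer before being fed to the LLM.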