Grounding Everything in Tokens for Multimodal Large Language Models

πŸ“… 2025-12-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Multimodal large language models (MLLMs) face inherent limitations in precise 2D object localization due to their reliance on autoregressive Transformer architectures, which process visual information sequentially rather than natively modeling spatial structure. To address this, we propose GETokβ€”a learnable token-based spatial representation method that introduces a novel collaborative mechanism between grid tokens and iterative offset tokens, directly encoding 2D spatial relationships into language tokens without modifying the underlying model architecture. GETok comprises learnable grid initialization, spatially aware token embedding, and end-to-end training, fully compatible with both supervised fine-tuning and reinforcement learning paradigms. On spatially sensitive tasks such as referring expression comprehension, GETok achieves state-of-the-art performance, significantly improving localization accuracy and cross-dataset generalization over existing methods.

πŸ“ Abstract
Multimodal large language models (MLLMs) have made significant advancements in vision understanding and reasoning. However, the autoregressive Transformer architecture used by MLLMs requires tokenization of input images, which limits their ability to accurately ground objects within the 2D image space. This raises an important question: how can sequential language tokens be improved to better ground objects in 2D spatial space for MLLMs? To address this, we present a spatial representation method for grounding objects, namely GETok, that integrates a specialized vocabulary of learnable tokens into MLLMs. GETok first uses grid tokens to partition the image plane into structured spatial anchors, and then exploits offset tokens to enable precise and iterative refinement of localization predictions. By embedding spatial relationships directly into tokens, GETok significantly advances MLLMs in native 2D space reasoning without modifying the autoregressive architecture. Extensive experiments demonstrate that GETok achieves superior performance over the state-of-the-art methods across various referring tasks in both supervised fine-tuning and reinforcement learning settings.
Problem

Research questions and friction points this paper is trying to address.

Improves object grounding in 2D space for MLLMs
Enhances spatial reasoning without altering autoregressive architecture
Addresses limitations of tokenization in multimodal language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grid tokens partition image into spatial anchors
Offset tokens enable iterative refinement of localization
Embed spatial relationships directly into tokens for reasoning
πŸ”Ž Similar Papers
No similar papers found.
Xiangxuan Ren
Shanghai Jiao Tong University
Computer Vision
Zhongdao Wang
Noah's Ark Lab, Huawei
Computer Vision, Autonomous Driving
Liping Hou
Huawei Noah’s Ark Lab
Pin Tang
Shanghai Jiao Tong University
Computer Vision, Autonomous Driving, Medical Image Analysis
Guoqing Wang
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Chao Ma
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University