🤖 AI Summary
Existing multimodal large language models (MLLMs) rely on textualized visual outputs (e.g., generating coordinates as text), which limits their performance and rules out dense prediction tasks such as segmentation. To address this, the paper proposes Patch-as-Decodable-Token (PaDT), a unified framework for detection, segmentation, and referring expression comprehension within a single MLLM architecture. PaDT introduces learnable, decodable Visual Reference Tokens (VRTs) that map image patch embeddings into visual tokens processed alongside text tokens, and a lightweight decoder that transforms the LLM's hidden states directly into pixel-level predictions, while a per-token cross-entropy loss and random VRT sampling keep training efficient. Evaluated on four mainstream visual perception benchmarks, PaDT achieves state-of-the-art performance, outperforming significantly larger MLLMs and demonstrating that native dense visual prediction is feasible without sacrificing language modeling capability. This work points toward end-to-end, unified multimodal perception within a single large language model.
📝 Abstract
Multimodal large language models (MLLMs) have advanced rapidly in recent years. However, existing approaches to vision tasks often rely on indirect representations, such as generating coordinates as text for detection, which limits performance and rules out dense prediction tasks like segmentation. To overcome these challenges, we introduce Patch-as-Decodable Token (PaDT), a unified paradigm that enables MLLMs to directly generate both textual and diverse visual outputs. Central to PaDT are Visual Reference Tokens (VRTs), derived from the visual patch embeddings of query images and interleaved seamlessly with the LLM's output textual tokens. A lightweight decoder then transforms the LLM's outputs into detection, segmentation, and grounding predictions. Unlike prior methods, PaDT processes VRTs independently at each forward pass and dynamically expands the embedding table, thereby improving localization and the differentiation of similar objects. We further tailor a training strategy for PaDT by randomly selecting VRTs for supervised fine-tuning and introducing a robust per-token cross-entropy loss. Empirical studies across four visual perception and understanding tasks show that PaDT consistently achieves state-of-the-art performance, even compared with significantly larger MLLMs. The code is available at https://github.com/Gorilla-Lab-SCUT/PaDT.
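To make the VRT idea concrete, here is a minimal toy sketch of how per-image patch embeddings could be appended to a vocabulary embedding table so the model can emit a mixed sequence of text-token ids and VRT ids. All names, shapes, and the `expand_embedding_table` helper are assumptions for illustration, not the authors' implementation (see the repository linked above for the real code).

```python
import numpy as np

# Toy setup (assumed, not from the paper): V text tokens, hidden size D.
rng = np.random.default_rng(0)
V, D = 8, 4
text_embed = rng.normal(size=(V, D))

# A query image yields N patch embeddings from the vision encoder.
N = 3
patch_embed = rng.normal(size=(N, D))

def expand_embedding_table(text_embed, patch_embed):
    """Append this image's VRT embeddings to the table for one forward pass.

    VRT ids start at V, so token id V + i refers to patch i of the current
    image; the table is rebuilt from scratch at every forward pass.
    """
    return np.concatenate([text_embed, patch_embed], axis=0)

table = expand_embedding_table(text_embed, patch_embed)
assert table.shape == (V + N, D)

# The LLM can now emit text ids (< V) interleaved with VRT ids (>= V);
# looking up a VRT id recovers the corresponding patch embedding, which a
# downstream decoder could turn into boxes or masks.
seq = [2, V + 1, 5]
hidden = table[seq]
assert np.allclose(hidden[1], patch_embed[1])
```

Rebuilding the VRT portion of the table per image (rather than keeping a fixed global set of object tokens) is what lets the ids stay tied to the current image's patches, which is one plausible reading of how this design aids differentiation among similar objects.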