Rethinking MLLM Itself as a Segmenter with a Single Segmentation Token

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes SELF1E, a novel framework that enables high-quality image segmentation within multimodal large language models (MLLMs) using only a single segmentation token, eliminating the need for external dedicated mask decoders or multi-token mechanisms. By preserving original-resolution image features, integrating residual information, introducing a pixel unshuffle operation, and employing a dual-path attention mechanism, SELF1E effectively enhances feature resolution and cross-modal interaction. Experimental results demonstrate that SELF1E achieves performance comparable to specialized decoder-based approaches across multiple segmentation benchmarks, establishing the feasibility of decoder-agnostic segmentation in MLLMs.
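The pixel unshuffle operation referenced in the summary is the inverse of pixel shuffle: it folds each r×r spatial neighborhood of a feature map into the channel dimension, trading spatial resolution for channel depth without losing information. A minimal NumPy sketch (the function name and shape convention are illustrative, not taken from the paper's code):

```python
import numpy as np

def pixel_unshuffle(x, r):
    """Fold r x r spatial blocks into channels: (C, H, W) -> (C*r*r, H//r, W//r)."""
    c, h, w = x.shape
    assert h % r == 0 and w % r == 0, "spatial dims must be divisible by r"
    # Split each spatial axis into (blocks, within-block offset).
    x = x.reshape(c, h // r, r, w // r, r)
    # Move the within-block offsets next to the channel axis.
    x = x.transpose(0, 2, 4, 1, 3)
    # Merge channel and offset axes.
    return x.reshape(c * r * r, h // r, w // r)
```

Because the rearrangement is a pure permutation of elements, applying the matching pixel shuffle recovers the original tensor exactly, which is why such operations can "unleash the details of compressed features" without discarding information.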

📝 Abstract
Recent segmentation methods leveraging Multi-modal Large Language Models (MLLMs) have shown reliable object-level segmentation and enhanced spatial perception. However, almost all previous methods predominantly rely on specialist mask decoders to interpret masks from generated segmentation-related embeddings and visual features, or incorporate multiple additional tokens to assist mask prediction. This paper aims to investigate whether and how we can unlock segmentation from MLLM itSELF with 1 segmentation Embedding (SELF1E) while achieving competitive results, which eliminates the need for external decoders. To this end, our approach targets the fundamental limitation of resolution reduction in pixel-shuffled image features from MLLMs. First, we retain image features at their original uncompressed resolution, and refill them with residual features extracted from MLLM-processed compressed features, thereby improving feature precision. Subsequently, we integrate pixel-unshuffle operations on image features with and without LLM processing, respectively, to unleash the details of compressed features and amplify the residual features under uncompressed resolution, which further enhances the resolution of the refilled features. Moreover, we redesign the attention mask with dual perception pathways, i.e., image-to-image and image-to-segmentation, enabling rich feature interaction between pixels and the segmentation token. Comprehensive experiments across multiple segmentation tasks validate that SELF1E achieves performance competitive with specialist mask decoder-based methods, demonstrating the feasibility of decoder-free segmentation in MLLMs. Project page: https://github.com/ANDYZAQ/SELF1E.
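The redesigned attention mask with dual perception pathways can be sketched as a modification of a standard causal mask: image tokens additionally attend bidirectionally to each other (image-to-image) and to the single segmentation token (image-to-segmentation). The token layout and mask construction below are assumptions for illustration; the paper's exact mechanism may differ.

```python
import numpy as np

def dual_path_attention_mask(n_txt, n_img):
    """Boolean attention mask (True = attention allowed).

    Assumed token layout: [text tokens | image tokens | 1 segmentation token].
    """
    n = n_txt + n_img + 1
    seg = n - 1                       # index of the single segmentation token
    img = slice(n_txt, n_txt + n_img)
    # Start from the usual causal (lower-triangular) mask.
    mask = np.tril(np.ones((n, n), dtype=bool))
    # Pathway 1 (image-to-image): full bidirectional attention among image tokens.
    mask[img, img] = True
    # Pathway 2 (image-to-segmentation): image tokens may attend to the seg token,
    # so pixel features and the segmentation embedding interact in both directions.
    mask[img, seg] = True
    return mask
```

In an actual transformer forward pass, positions where the mask is False would be filled with a large negative value before the softmax, as in standard masked attention.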
Problem

Research questions and friction points this paper is trying to address.

MLLM
segmentation
decoder-free
single token
image segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

decoder-free segmentation
Multi-modal Large Language Model (MLLM)
single segmentation token
pixel-unshuffle
dual perception attention