DenseMLLM: Standard Multimodal LLMs are Intrinsic Dense Predictors

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard multimodal large language models (MLLMs) struggle to perform fine-grained dense prediction tasks directly and typically rely on task-specific decoders, which compromises their generality. This work proposes a universal, architecture-agnostic approach that enables standard MLLMs to execute dense prediction tasks—such as semantic segmentation and depth estimation—in an end-to-end manner through a multi-label, multi-task visual token supervision mechanism. By eliminating the need for custom-designed decoders, the method preserves model simplicity while fully unlocking the inherent dense perception capabilities of MLLMs. Extensive experiments demonstrate that this approach achieves highly competitive performance across multiple dense prediction and vision-language benchmarks.

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in high-level visual understanding. However, extending these models to fine-grained dense prediction tasks, such as semantic segmentation and depth estimation, typically necessitates the incorporation of complex, task-specific decoders and other customizations. This architectural fragmentation increases model complexity and deviates from the generalist design of MLLMs, ultimately limiting their practicality. In this work, we challenge this paradigm by adapting standard MLLMs to perform dense prediction without additional task-specific decoders. The proposed model, DenseMLLM, retains the standard architecture and introduces a novel vision-token supervision strategy for multiple labels and tasks. Despite its minimalist design, our model achieves highly competitive performance across a wide range of dense prediction and vision-language benchmarks, demonstrating that a standard, general-purpose MLLM can effectively support dense perception without architectural specialization.
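The abstract does not spell out the supervision mechanism, but the core idea of attaching per-token, multi-task labels directly to vision tokens (rather than routing them through a task-specific decoder) can be illustrated with a toy sketch. Everything below is hypothetical: the shapes, the linear probes, and the loss weighting are illustrative assumptions, not DenseMLLM's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 16 vision tokens from the MLLM, hidden size 32,
# 5 semantic classes. Each token carries both a class label and a depth value.
tokens = rng.standard_normal((16, 32))          # vision-token hidden states
W_seg = rng.standard_normal((32, 5)) * 0.1      # linear probe for segmentation
w_depth = rng.standard_normal(32) * 0.1         # linear probe for depth

seg_labels = rng.integers(0, 5, size=16)        # per-token semantic class
depth_labels = rng.uniform(0.0, 10.0, size=16)  # per-token metric depth

def dense_token_loss(tokens, seg_labels, depth_labels):
    """Multi-label, multi-task loss applied directly to vision tokens,
    with no dense-prediction decoder in between."""
    # Segmentation branch: per-token softmax cross-entropy.
    logits = tokens @ W_seg
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    seg_loss = -log_probs[np.arange(len(tokens)), seg_labels].mean()
    # Depth branch: per-token L1 regression.
    depth_pred = tokens @ w_depth
    depth_loss = np.abs(depth_pred - depth_labels).mean()
    # Equal task weighting, purely for illustration.
    return seg_loss + depth_loss

loss = dense_token_loss(tokens, seg_labels, depth_labels)
print(float(loss))
```

The point of the sketch is only that dense supervision can bind labels to the tokens the standard architecture already produces; how DenseMLLM actually formulates and balances its vision-token losses is detailed in the paper itself.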
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Dense Prediction
Semantic Segmentation
Depth Estimation
Architectural Fragmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dense Prediction
Multimodal LLM
Vision Token Supervision
Generalist Architecture
Semantic Segmentation
Yi Li
The Hong Kong University of Science and Technology
MLLM · CV
Hongze Shen
Tencent, Youtu-Lab, China
Lexiang Tang
Tencent, Youtu-Lab, China
Xin Li
Tencent Youtu Lab
Xinpeng Ding
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
Yinsong Liu
Tencent, Youtu-Lab, China
Deqiang Jiang
Tencent Youtu Lab
Xing Sun
Tencent Youtu Lab
LLM · MLLM · Agent
Xiaomeng Li
Assistant Professor, The Hong Kong University of Science and Technology
Medical Image Analysis · AI in Healthcare · Deep Learning