What Do Visual Tokens Really Encode? Uncovering Sparsity and Redundancy in Multimodal Large Language Models

📅 2026-02-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the poorly understood internal representation of visual semantics in multimodal large language models (MLLMs), which often leads to redundant computation and inefficient use of visual tokens. We propose EmbedLens, an analytical framework that reveals significant semantic sparsity among visual tokens already at the input stage, categorizing them into sink, dead, and alive types; only about 60% are "alive" tokens carrying image-specific information. Building on this insight, we introduce a mid-layer direct injection strategy that bypasses the conventional shallow-processing paradigm. Experiments demonstrate that most tasks do not require full internal visual computation, as rich semantics can be effectively encoded using only alive tokens; for vision-intensive tasks, mid-layer injection substantially improves both efficiency and performance, paving a new path toward efficient and interpretable MLLM design.

Technology Category

Application Category

๐Ÿ“ Abstract
Multimodal large language models (MLLMs) project visual tokens into the embedding space of language models, yet the internal structuring and processing of visual semantics remain poorly understood. In this work, we introduce a two-fold analytical framework featuring a novel probing tool, EmbedLens, to conduct a fine-grained analysis. We uncover a pronounced semantic sparsity at the input level: visual tokens consistently partition into sink, dead, and alive categories. Remarkably, only the alive tokens, comprising ≈60% of the total input, carry image-specific meaning. Furthermore, using a targeted patch-compression benchmark, we demonstrate that these alive tokens already encode rich, fine-grained cues (e.g., objects, colors, and OCR) prior to entering the LLM. Internal visual computations (such as visual attention and feed-forward networks) are redundant for most standard tasks. For the small subset of highly vision-centric tasks that actually benefit from internal processing, we reveal that alive tokens naturally align with intermediate LLM layers rather than the initial embedding space, indicating that shallow-layer processing is unnecessary and that direct mid-layer injection is sufficient. Ultimately, our findings provide a unified mechanistic view of visual token processing, paving the way for more efficient and interpretable MLLM architectures through selective token pruning, minimized visual computation, and mid-layer injection. The code is released at: https://github.com/EIT-NLP/EmbedLens.
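The two ideas in the abstract — keeping only "alive" visual tokens and injecting them at an intermediate layer rather than the input — can be illustrated with a toy sketch. This is not the paper's implementation: the layer stack is a stand-in for a transformer, the `alive_mask` is a placeholder for whatever criterion EmbedLens uses to identify image-specific tokens, and `inject_at` is an arbitrary hypothetical injection depth.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_layer(h, W):
    # One toy "transformer layer": a linear map plus a nonlinearity.
    return np.tanh(h @ W)

d = 8                 # hidden size
n_layers = 6
inject_at = 3         # hypothetical mid-layer injection point
Ws = [rng.normal(scale=0.3, size=(d, d)) for _ in range(n_layers)]

text_tokens = rng.normal(size=(5, d))    # 5 text-token embeddings
visual_tokens = rng.normal(size=(4, d))  # 4 visual-token embeddings

# Keep only "alive" visual tokens (placeholder mask standing in for
# the sink/dead/alive categorization described in the paper).
alive_mask = np.array([True, False, True, True])
alive_visual = visual_tokens[alive_mask]

h = text_tokens
for i, W in enumerate(Ws):
    if i == inject_at:
        # Mid-layer injection: alive visual tokens join the sequence
        # here, skipping the shallow layers entirely.
        h = np.concatenate([alive_visual, h], axis=0)
    h = toy_layer(h, W)

print(h.shape)  # (3 alive visual + 5 text, d) -> (8, 8)
```

The point of the sketch is structural: the visual tokens never pass through layers 0 to 2, so any compute those layers would have spent on them is saved, matching the abstract's claim that shallow-layer processing of visual tokens is unnecessary.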
Problem

Research questions and friction points this paper is trying to address.

visual tokens
semantic sparsity
multimodal large language models
redundancy
visual semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual tokens
semantic sparsity
EmbedLens
mid-layer injection
multimodal large language models
Yingqi Fan
Institute of Digital Twin, Eastern Institute of Technology, Ningbo; Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative
Junlong Tong
Institute of Digital Twin, Eastern Institute of Technology, Ningbo; Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative; Shanghai Jiao Tong University
Anhao Zhao
Institute of Digital Twin, Eastern Institute of Technology, Ningbo; The Hong Kong Polytechnic University
Xiaoyu Shen
Eastern Institute of Technology, Ningbo
language model, multi-modal learning, reasoning