Positional Preservation Embedding for Multimodal Large Language Models

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address visual token redundancy in multimodal large language models (MLLMs), which degrades inference efficiency, and the structural distortion caused by existing compression methods that neglect spatiotemporal positional relationships, this paper proposes a position-aware visual token compression framework. The method introduces: (1) a decoupled 3D positional encoding scheme that explicitly integrates spatial and temporal coordinates into compressed tokens; (2) a parameter-free, cascaded clustering strategy compatible with diverse token merging frameworks; and (3) Positional Preservation Embedding (PPE) for structure-aware compression without model fine-tuning. Evaluated on MMBench, TextVQA, and VideoMME, the approach achieves consistent 2–5% performance gains with no parameter adjustment, demonstrating for the first time the critical role of explicit positional modeling in efficient MLLM inference.

📝 Abstract
Multimodal large language models (MLLMs) have achieved strong performance on vision-language tasks, yet often suffer from inefficiencies due to redundant visual tokens. Existing token merging methods reduce sequence length but frequently disrupt spatial layouts and temporal continuity by disregarding positional relationships. In this work, we propose a novel encoding operator dubbed Positional Preservation Embedding (PPE), whose main hallmark is the preservation of spatiotemporal structure during visual token compression. PPE explicitly introduces a disentangled encoding of 3D positions in the token dimension, enabling each compressed token to encapsulate different positions from multiple original tokens. Furthermore, we show that PPE can effectively support cascade clustering, a progressive token compression strategy that leads to better performance retention. PPE is a parameter-free and generic operator that can be seamlessly integrated into existing token merging methods without any adjustments. Applied to a state-of-the-art token merging framework, PPE achieves consistent improvements of 2%–5% across multiple vision-language benchmarks, including MMBench (general vision understanding), TextVQA (layout understanding), and VideoMME (temporal understanding). These results demonstrate that preserving positional cues is critical for efficient and effective MLLM reasoning.
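The abstract's central idea is a merged token that retains a disentangled encoding of the (t, h, w) coordinates of every original token it absorbs. The sketch below illustrates one plausible parameter-free realization; the `sincos_1d` encoding, the dimension split, and the `ppe_merge` helper are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sincos_1d(pos, dim):
    """Standard 1-D sinusoidal embedding for a scalar position (assumption:
    the paper's exact positional code may differ; this is the usual
    transformer-style form)."""
    i = np.arange(dim // 2)
    freqs = pos / (10000 ** (2 * i / dim))
    return np.concatenate([np.sin(freqs), np.cos(freqs)])

def ppe_merge(tokens, positions, groups, dim):
    """Merge visual tokens group-wise while keeping a disentangled (t, h, w)
    positional code for every original token absorbed into the merged one.

    tokens:    (N, dim) array of visual token features
    positions: list of (t, h, w) integer coordinates, one per token
    groups:    list of index lists; each inner list is one merge group
    """
    merged = []
    third = dim // 3
    for idx in groups:
        feat = tokens[idx].mean(axis=0)           # feature merging (mean pooling)
        # Disentangled 3-D position code: encode each axis separately and
        # average over the member tokens, so the merged token "remembers"
        # where its sources came from along each axis.
        pos_code = np.zeros(dim)
        for j in idx:
            t, h, w = positions[j]
            pos_code[:third] += sincos_1d(t, third)
            pos_code[third:2 * third] += sincos_1d(h, third)
            pos_code[2 * third:3 * third] += sincos_1d(w, third)
        pos_code /= len(idx)
        merged.append(feat + pos_code)            # parameter-free: no learned weights
    return np.stack(merged)
```

Because each axis occupies its own slice of the embedding, two merged tokens built from tokens at different heights (or frames) receive different positional codes even when their features match.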
Problem

Research questions and friction points this paper is trying to address.

Reducing redundant visual tokens in multimodal language models without sacrificing efficiency
Preserving spatial layouts and temporal continuity during token compression
Improving performance on vision-language tasks without adding parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Positional Preservation Embedding preserves spatiotemporal structure
Disentangled encoding enables compressed tokens to encapsulate positions
Parameter-free operator integrates seamlessly into existing token merging methods
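The cascade clustering mentioned above can be pictured as repeated rounds of similarity-based merging rather than one-shot compression to the final token budget. The bipartite matching below is a common simplification in the style of ToMe-like token merging; the actual clustering criterion and compression schedule are assumptions here:

```python
import numpy as np

def merge_stage(tokens):
    """One clustering round: split tokens into two sets, match each token in
    set A to its most similar token in set B by cosine similarity, and average
    each matched pair. Simplification: every A-token merges, and B-tokens that
    receive no partner are dropped (real methods merge only the top-r pairs)."""
    a, b = tokens[0::2], tokens[1::2]
    an = a / np.linalg.norm(a, axis=1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    partner = np.argmax(an @ bn.T, axis=1)   # best B partner for each A token
    return (a + b[partner]) / 2

def cascade_compress(tokens, stages=2):
    """Hypothetical cascade: compress progressively over several small stages
    instead of collapsing to the final budget in a single step, which is the
    'progressive' behavior the paper attributes to cascade clustering."""
    for _ in range(stages):
        if tokens.shape[0] < 2:
            break
        tokens = merge_stage(tokens)
    return tokens
```

Each stage roughly halves the token count, so two stages map 8 tokens to 2; a position-preserving embedding such as PPE would be applied at every merge so the surviving tokens keep their spatiotemporal provenance.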
Mouxiao Huang
Huawei Noah's Ark Lab
Borui Jiang
Huawei Noah's Ark Lab
Dehua Zheng
Huawei Noah's Ark Lab
Hailin Hu
Huawei Noah's Ark Lab
Kai Han
Huawei Noah's Ark Lab
Xinghao Chen
Huawei Noah's Ark Lab