🤖 AI Summary
To address visual token redundancy in multimodal large language models (MLLMs), which degrades inference efficiency, and the structural distortion caused by existing compression methods that neglect spatiotemporal positional relationships, this paper proposes a position-aware visual token compression framework. The method introduces: (1) a decoupled 3D positional encoding scheme that explicitly integrates spatial and temporal coordinates into compressed tokens; (2) a parameter-free, cascaded clustering strategy compatible with diverse token merging frameworks; and (3) Positional Preservation Embedding (PPE) for structure-aware compression without model fine-tuning. Evaluated on MMBench, TextVQA, and VideoMME, the approach achieves consistent 2–5% performance gains without any parameter adjustment, demonstrating for the first time the critical role of explicit positional modeling in efficient MLLM inference.
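The cascaded clustering idea in (2) can be illustrated as a greedy, parameter-free pairwise merge applied over several stages rather than all at once. The sketch below is illustrative only (the function names, cosine-similarity criterion, and pairwise halving schedule are assumptions, not the paper's algorithm):

```python
import numpy as np

def merge_nearest_pairs(tokens):
    """One cascade stage: greedily merge each token with its most similar
    unmatched token (cosine similarity), roughly halving the sequence length."""
    n = len(tokens)
    norm = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)
    # Visit candidate pairs from most to least similar.
    order = np.dstack(np.unravel_index(np.argsort(-sim, axis=None), sim.shape))[0]
    used, merged = set(), []
    for i, j in order:
        if i == j or i in used or j in used:
            continue
        used.update((int(i), int(j)))
        merged.append((tokens[i] + tokens[j]) / 2)  # parameter-free average merge
    for i in range(n):  # an odd leftover token passes through unchanged
        if i not in used:
            merged.append(tokens[i])
    return np.stack(merged)

def cascade_compress(tokens, stages=2):
    """Progressive compression: several mild merge stages instead of one
    aggressive merge, which the paper reports retains performance better."""
    for _ in range(stages):
        tokens = merge_nearest_pairs(tokens)
    return tokens
```

With 8 input tokens and `stages=2`, the sequence is compressed 8 → 4 → 2.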
📝 Abstract
Multimodal large language models (MLLMs) have achieved strong performance on vision-language tasks, yet often suffer from inefficiencies due to redundant visual tokens. Existing token merging methods reduce sequence length but frequently disrupt spatial layouts and temporal continuity by disregarding positional relationships. In this work, we propose a novel encoding operator dubbed **P**ositional **P**reservation **E**mbedding (**PPE**), whose hallmark is the preservation of spatiotemporal structure during visual token compression. PPE explicitly introduces a disentangled encoding of 3D positions in the token dimension, enabling each compressed token to encapsulate the distinct positions of multiple original tokens. Furthermore, we show that PPE effectively supports cascade clustering, a progressive token compression strategy that leads to better performance retention. PPE is a parameter-free, generic operator that can be seamlessly integrated into existing token merging methods without any adjustments. Applied to a state-of-the-art token merging framework, PPE achieves consistent improvements of 2–5% across multiple vision-language benchmarks, including MMBench (general vision understanding), TextVQA (layout understanding), and VideoMME (temporal understanding). These results demonstrate that preserving positional cues is critical for efficient and effective MLLM reasoning.
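The core PPE idea, as described above, is that a merged token should carry a disentangled record of where its source tokens came from along the temporal and spatial axes. A minimal sketch of that idea follows; the sinusoidal per-axis encoding, the averaging of member positions, and all function names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def sincos_1d(pos, dim):
    """Sinusoidal encoding of scalar positions along a single axis -> [N, dim]."""
    freqs = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    angles = pos[:, None] * freqs[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def merge_with_ppe(tokens, coords, cluster_ids, axis_dim=8):
    """Merge clustered visual tokens while preserving 3D position information.

    tokens:      [N, D] visual token features
    coords:      [N, 3] integer (t, h, w) position of each token
    cluster_ids: [N] cluster assignment from any token merging method
    Returns [C, D + 3*axis_dim]: averaged features plus a parameter-free,
    disentangled positional code (one block per axis: t, h, w).
    """
    merged = []
    for c in np.unique(cluster_ids):
        m = cluster_ids == c
        feat = tokens[m].mean(axis=0)  # standard average merge of the cluster
        # Encode each axis separately, then average over cluster members so the
        # compressed token summarizes all of its original positions.
        pos_code = np.concatenate([
            sincos_1d(coords[m, a].astype(float), axis_dim).mean(axis=0)
            for a in range(3)
        ])
        merged.append(np.concatenate([feat, pos_code]))
    return np.stack(merged)
```

Because the positional code is computed, not learned, the operator adds no parameters and can wrap any clustering-based merging scheme without fine-tuning, mirroring the plug-and-play property claimed in the abstract.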