🤖 AI Summary
Multimodal large language models (MLLMs) rely on patch-level image tokenization, causing quadratic growth in token count with resolution—leading to excessive computational cost, high memory consumption, and visual hallucinations. Moreover, their raster-scan tokenization contradicts the human top-down object-centric perception paradigm. To address this, we propose the first object-aware adaptive token compression method for MLLMs: it detects semantically meaningful objects in images, dynamically merges redundant patch tokens, and integrates multi-scale feature fusion with semantic consistency alignment to achieve cognitively grounded, efficient compression. Evaluated across multiple benchmarks, our method restores 96% of full-token performance using only 10% of the original tokens—substantially outperforming existing token compression approaches. It simultaneously enhances inference efficiency and robustness, bridging the gap between computational scalability and perceptual fidelity in MLLMs.
📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated substantial value in unified text-image understanding and reasoning, primarily by converting images into sequences of patch-level tokens that align with their architectural paradigm. However, patch-level tokenization leads to a quadratic growth in image tokens, burdening MLLMs' understanding and reasoning with enormous computation and memory. Additionally, the traditional patch-wise scanning tokenization workflow misaligns with the human vision cognition system, further leading to hallucination and computational redundancy. To address this issue, we propose an object-level token merging strategy for Adaptive Token compression, revealing the consistency with human vision system. The experiments are conducted on multiple comprehensive benchmarks, which show that our approach averagely, utilizes only 10% tokens while achieving almost 96% of the vanilla model's performance. More extensive experimental results in comparison with relevant works demonstrate the superiority of our method in balancing compression ratio and performance. Our code will be available.