AdaTok: Adaptive Token Compression with Object-Aware Representations for Efficient Multimodal LLMs

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) rely on patch-level image tokenization, causing quadratic growth in token count with resolution—leading to excessive computational cost, high memory consumption, and visual hallucinations. Moreover, their raster-scan tokenization contradicts the human top-down object-centric perception paradigm. To address this, we propose the first object-aware adaptive token compression method for MLLMs: it detects semantically meaningful objects in images, dynamically merges redundant patch tokens, and integrates multi-scale feature fusion with semantic consistency alignment to achieve cognitively grounded, efficient compression. Evaluated across multiple benchmarks, our method restores 96% of full-token performance using only 10% of the original tokens—substantially outperforming existing token compression approaches. It simultaneously enhances inference efficiency and robustness, bridging the gap between computational scalability and perceptual fidelity in MLLMs.
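The object-level merging described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name and data layout are hypothetical, the object assignment is assumed to come from an upstream detection/segmentation stage, and mean-pooling stands in for whatever merging operator the paper actually uses.

```python
def merge_tokens_by_object(patch_tokens, object_ids):
    """Merge patch tokens that belong to the same detected object.

    patch_tokens: list of D-dimensional embedding vectors (one per patch).
    object_ids:   parallel list of object labels, e.g. from a segmenter.
    Returns one mean-pooled token per object, in order of first appearance,
    so N patch tokens compress to K object tokens (K = number of objects).
    """
    groups, order = {}, []
    for tok, obj in zip(patch_tokens, object_ids):
        if obj not in groups:
            groups[obj] = []
            order.append(obj)
        groups[obj].append(tok)
    merged = []
    for obj in order:
        toks = groups[obj]
        # Element-wise mean over all patches assigned to this object.
        merged.append([sum(vals) / len(toks) for vals in zip(*toks)])
    return merged

# Four patches covering two objects compress to two tokens.
tokens = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0], [6.0, 6.0]]
ids = [0, 0, 1, 1]
print(merge_tokens_by_object(tokens, ids))  # [[1.0, 1.0], [5.0, 5.0]]
```

The compression ratio here is simply K/N, which is why token count scales with scene complexity (number of objects) rather than image resolution.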

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated substantial value in unified text-image understanding and reasoning, primarily by converting images into sequences of patch-level tokens that align with their architectural paradigm. However, patch-level tokenization leads to quadratic growth in the number of image tokens, burdening MLLMs' understanding and reasoning with enormous computation and memory costs. Additionally, the traditional patch-wise scanning tokenization workflow misaligns with the human visual cognition system, which further leads to hallucination and computational redundancy. To address these issues, we propose an object-level token merging strategy for Adaptive Token compression that is consistent with the human vision system. Experiments on multiple comprehensive benchmarks show that, on average, our approach uses only 10% of the tokens while achieving almost 96% of the vanilla model's performance. More extensive comparisons with related work demonstrate the superiority of our method in balancing compression ratio and performance. Our code will be made available.
Problem

Research questions and friction points this paper is trying to address.

Reduces quadratic growth of image tokens in MLLMs
Addresses misalignment with human vision cognition system
Mitigates computational redundancy and hallucination issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-level token merging strategy for compression
Adaptive token compression matching human vision
Achieves 96% performance with 10% tokens
Xinliang Zhang
Institute of Medical Technology, Peking University Health Science Center
Lei Zhu
Department of Biomedical Engineering, Peking University
Hangzhou He
PhD student, Peking University
Explainability, Medical Image Analysis, Trustworthy AI
Shuang Zeng
Peking University, Georgia Institute of Technology
Self-supervised Contrastive Learning, Medical Image Segmentation, Superpixel, Large Language Model
Ourui Fu
Department of Biomedical Engineering, Peking University
Jiakui Hu
Institute of Medical Technology, Peking University Health Science Center
Zhengjian Yao
Peking University
Computer Vision, Generative Model
Yanye Lu
Peking University
Medical Imaging, Deep Learning, Machine Learning