🤖 AI Summary
To address the low visual token compression efficiency of high-resolution multimodal large language models (MLLMs) and the difficulty of preserving both global semantics and fine-grained local details during inference, this paper proposes GlobalCom²—a training-free, globally guided compression method. Its core innovation is a "thumbnail-as-commander" mechanism, in which the thumbnail guides adaptive token retention across the cropped regions, enabling dynamic, resolution-agnostic allocation of retention ratios. GlobalCom² further integrates attention-based importance scoring and global–local collaborative compression, making it naturally compatible with AnyRes architectures and mainstream MLLMs such as LLaVA-NeXT. Evaluated on ten standard benchmarks, GlobalCom² achieves an average 2.1× inference speedup for LLaVA-NeXT-7B/13B while incurring less than 0.8% performance degradation. The implementation is publicly available.
📝 Abstract
Multimodal large language models (MLLMs) have attracted considerable attention due to their exceptional performance in visual content understanding and reasoning. However, their inference efficiency has been a notable concern, as the increasing length of multimodal contexts leads to quadratic complexity. Token compression techniques, which reduce the number of visual tokens, have demonstrated their effectiveness in reducing computational costs. Yet, these approaches have struggled to keep pace with the rapid advancements in MLLMs, especially the AnyRes strategy in the context of high-resolution image understanding. In this paper, we propose a novel token compression method, GlobalCom², tailored for high-resolution MLLMs that receive both the thumbnail and multiple crops. GlobalCom² treats the tokens derived from the thumbnail as the "commander" of the entire token compression process, directing the allocation of retention ratios and the specific compression for each crop. In this way, redundant tokens are eliminated while important local details are adaptively preserved to the highest extent feasible. Empirical results across 10 benchmarks reveal that GlobalCom² achieves an optimal balance between performance and efficiency, and consistently outperforms state-of-the-art token compression methods with LLaVA-NeXT-7B/13B models. Our code is released at https://github.com/xuyang-liu16/GlobalCom2.
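To make the "thumbnail-as-commander" idea concrete, here is a minimal sketch of the two steps the abstract describes: a global token budget is split across crops in proportion to per-crop importance derived from the thumbnail, and each crop then keeps its highest-scoring tokens. All function names, the proportional-allocation rule, and the toy shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def allocate_retention(crop_importance, total_budget):
    """Split a global token budget across crops in proportion to the
    importance each crop receives from the thumbnail ('commander').
    Hypothetical allocation rule: proportional split with rounding."""
    weights = crop_importance / crop_importance.sum()
    budgets = np.floor(weights * total_budget).astype(int)
    # hand any leftover tokens to the most important crops first
    leftover = total_budget - budgets.sum()
    for i in np.argsort(weights)[::-1][:leftover]:
        budgets[i] += 1
    return budgets

def compress_crop(crop_tokens, token_scores, keep_n):
    """Keep the keep_n highest-scoring tokens of one crop,
    preserving their original spatial order."""
    top = np.argsort(token_scores)[::-1][:keep_n]
    return crop_tokens[np.sort(top)]

# toy example: 3 crops of 8 tokens (dim 4) each, global budget of 12 tokens
rng = np.random.default_rng(0)
crop_importance = np.array([0.5, 0.3, 0.2])   # assumed thumbnail-derived scores
budgets = allocate_retention(crop_importance, total_budget=12)
crops = [rng.normal(size=(8, 4)) for _ in range(3)]
scores = [rng.random(8) for _ in range(3)]
kept = [compress_crop(c, s, b) for c, s, b in zip(crops, scores, budgets)]
```

In this sketch the more important a crop looks from the thumbnail, the more of its tokens survive, so fine-grained detail is retained where the global view says it matters.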