AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning

📅 2024-12-04
🏛️ arXiv.org
📈 Citations: 10
Influential: 1
🤖 AI Summary
To address the high inference overhead and poor adaptability to long-context and resource-constrained scenarios caused by visual token redundancy in multimodal large language models (MLLMs), this paper proposes a training-free, two-stage adaptive token compression method. First, visual tokens are iteratively merged based on embedding similarity; second, progressive pruning is performed guided by cross-layer multimodal importance scoring. Our approach unifies efficient compression for both image and video inputs and enables plug-and-play deployment without fine-tuning. Experiments on mainstream image and video benchmarks demonstrate up to 7× FLOPs reduction and a +4.6 gain in MLVU score on long-video understanding tasks, with negligible performance degradation. The core contributions include: (i) uncovering empirical patterns of multimodal token redundancy and LLM layer-wise behavior, and (ii) establishing the first training-free, staged, and multimodal-coordinated token compression paradigm.

📝 Abstract
Large language models (LLMs) have enabled the creation of multi-modal LLMs that exhibit strong comprehension of visual data such as images and videos. However, these models usually rely on extensive visual tokens from visual encoders, leading to high computational demands, which limits their applicability in resource-constrained environments and for long-context tasks. In this work, we propose a training-free adaptive inference method for multi-modal LLMs that can accommodate a broad range of efficiency requirements with minimal performance drop. Our method consists of a) iterative token merging based on embedding similarity before LLMs, and b) progressive token pruning within LLM layers based on multi-modal importance. With a minimalist design, our method can be applied to both video and image LLMs. Extensive experiments on diverse video and image benchmarks demonstrate that our method substantially reduces computation load (e.g., a 7-fold reduction in FLOPs) while preserving the performance of video and image LLMs. Further, under a similar computational cost, our method outperforms the state-of-the-art methods in long video understanding (e.g., +4.6 on MLVU). Additionally, our in-depth analysis provides insights into token redundancy and LLM layer behaviors, offering guidance for future research in designing efficient multi-modal LLMs. Our code will be available at https://github.com/LaVi-Lab/AIM.
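The first stage, iterative merging of visual tokens by embedding similarity, can be illustrated with a minimal sketch: repeatedly find the most similar pair of token embeddings (cosine similarity) and average them until a target count is reached. This is a simplified illustration, not the paper's exact merging schedule; the function name and mean-pooling choice are assumptions.

```python
import numpy as np

def merge_most_similar(tokens: np.ndarray, target_len: int) -> np.ndarray:
    """Iteratively merge the most similar pair of token embeddings
    until only `target_len` tokens remain. Illustrative sketch only;
    AIM's actual merging procedure may differ."""
    tokens = tokens.astype(np.float64)
    while len(tokens) > target_len:
        # Normalize rows and compute pairwise cosine similarity.
        norm = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
        sim = norm @ norm.T
        np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        # Replace the most similar pair with its mean embedding.
        merged = (tokens[i] + tokens[j]) / 2
        keep = [k for k in range(len(tokens)) if k not in (i, j)]
        tokens = np.vstack([tokens[keep], merged[None, :]])
    return tokens
```

Each iteration removes two tokens and adds one merged token, so the sequence shrinks by one per step until the budget is met.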
Problem

Research questions and friction points this paper is trying to address.

Reduces computational load in multi-modal LLMs
Improves efficiency for resource-constrained environments
Enhances long-context task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free adaptive inference for multi-modal LLMs
Token merging based on embedding similarity
Progressive token pruning via multi-modal importance
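The second stage, progressive pruning inside the LLM, amounts to keeping only the highest-scoring visual tokens at selected layers. A minimal sketch, assuming a per-token importance score (e.g., derived from text-to-visual attention) is already available; the function name and the fixed keep ratio are illustrative assumptions, not the paper's exact scoring or schedule.

```python
import numpy as np

def prune_by_importance(tokens: np.ndarray, scores: np.ndarray,
                        keep_ratio: float = 0.75):
    """Keep the top `keep_ratio` fraction of tokens ranked by an
    importance score; applying this at successive layers yields
    progressive pruning. Illustrative sketch only."""
    k = max(1, int(len(tokens) * keep_ratio))
    # Indices of the k highest-scoring tokens, restored to input order.
    keep_idx = np.sort(np.argsort(scores)[::-1][:k])
    return tokens[keep_idx], keep_idx
```

Calling this at several layers with a ratio below 1.0 compounds the reduction, which is one way a multi-layer pruning schedule can trade accuracy for FLOPs.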