Beyond Token Compression: A Training-Free Reduction Framework for Efficient Visual Processing in MLLMs

📅 2025-01-31
🤖 AI Summary
Decoder-only multimodal large language models (MLLMs) perform extensive self-attention and FFN computation on visual tokens, much of which is redundant. Method: a training-free, lightweight reduction framework comprising (1) Hollow Attention, a localized visual attention mechanism that restricts visual-to-visual attention to local windows while preserving visual-text attention; (2) Probe-Activated Dynamic FFN, a probe-driven gating mechanism that selectively activates FFN parameters for visual tokens; and (3) a greedy search method that efficiently identifies which layers, roughly half in practice, can be reduced. Contribution/Results: applying these reductions to approximately half of the layers in state-of-the-art MLLMs maintains, and sometimes improves, downstream task performance while significantly reducing visual-token computation. The method is orthogonal to existing token-compression techniques and can be combined with them for further savings.
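The probe-driven FFN gating described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the class name, the use of the first few visual tokens as the probe, and the top-k magnitude criterion for selecting hidden units are all assumptions.

```python
import torch
import torch.nn as nn


class ProbeActivatedFFN(nn.Module):
    """Sketch of a probe-gated FFN: a small probe subset of visual
    tokens estimates which hidden units matter, and only those units
    are computed for the full set of visual tokens (training-free)."""

    def __init__(self, d_model=64, d_hidden=256, keep_ratio=0.25):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.keep = int(d_hidden * keep_ratio)  # active hidden units

    def forward(self, visual_tokens):
        # Probe: mean activation magnitude over a few leading tokens
        # (assumed probe choice for illustration).
        probe = visual_tokens[:, :4]                     # (B, P, d_model)
        scores = torch.relu(self.up(probe)).mean(dim=(0, 1))
        idx = scores.topk(self.keep).indices             # selected units
        # Compute only the selected slice of the FFN weights.
        h = torch.relu(visual_tokens @ self.up.weight[idx].T + self.up.bias[idx])
        return h @ self.down.weight[:, idx].T + self.down.bias
```

Only `keep_ratio` of the hidden dimension is ever materialized for visual tokens, which is where the FLOP savings would come from in this sketch.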

📝 Abstract
Multimodal Large Language Models (MLLMs) are typically based on decoder-only or cross-attention architectures. While decoder-only MLLMs outperform their cross-attention counterparts, they require significantly higher computational resources due to extensive self-attention and FFN operations on visual tokens. This raises the question: can we eliminate these expensive operations while maintaining the performance? To this end, we present a novel analysis framework to investigate the necessity of these costly operations in decoder-only MLLMs. Our framework introduces two key innovations: (1) Hollow Attention, which limits visual token interactions to local attention while maintaining visual-text associations, and (2) Probe-Activated Dynamic FFN, which selectively activates FFN parameters for visual tokens. Both methods do not require fine-tuning, which significantly enhances analysis efficiency. To assess the impact of applying these reductions across different proportions of layers, we developed a greedy search method that significantly narrows the search space. Experiments on state-of-the-art MLLMs reveal that applying our reductions to approximately half of the layers not only maintains but sometimes improves model performance, indicating significant computational redundancy in current architectures. Additionally, our method is orthogonal to existing token compression techniques, allowing for further combination to achieve greater computational reduction. Our findings may provide valuable insights for the design of more efficient future MLLMs. Our code will be publicly available at https://github.com/L-Hugh/Beyond-Token-Compression.
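The Hollow Attention idea from the abstract, local attention among visual tokens while visual-text associations are kept intact, can be sketched as a mask builder. The function name, the `[visual | text]` token ordering, and the window size are assumptions for illustration, not details from the paper.

```python
import torch


def hollow_attention_mask(num_visual, num_text, window=2):
    """Boolean mask (True = may attend) for a decoder sequence laid out
    as [visual tokens | text tokens]: visual-to-visual attention is
    limited to a local window, while causal visual-text and
    text-text attention are left unchanged."""
    n = num_visual + num_text
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(n, n, dtype=torch.bool))
    # Hollow out visual-to-visual attention beyond the local window.
    for i in range(num_visual):
        for j in range(num_visual):
            if abs(i - j) > window:
                mask[i, j] = False
    return mask
```

Because only the visual-visual block is sparsified, text tokens still attend to every visible visual token, which matches the abstract's claim that visual-text associations are maintained.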
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Resource-intensive Operations
Performance Maintenance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hollow Attention
Probe-Activated Dynamic FFN
Resource-efficient MLLMs
Hongliang Li
South China University of Technology
Jiaxin Zhang
South China University of Technology
Wenhui Liao
South China University of Technology
Dezhi Peng
Huawei Technologies; South China University of Technology
Computer Vision
Kai Ding
Intsig Information Co., Ltd.
Lianwen Jin
Professor of Electronic and Information Engineering, South China University of Technology
Optical Character Recognition (OCR), Computer Vision, Document AI, Multimodal LLMs