🤖 AI Summary
Decoder-only multimodal large language models (MLLMs) incur substantial computation from self-attention and FFN operations over visual tokens. Method: A training-free analysis framework with two components: (1) Hollow Attention, which restricts visual-to-visual interaction to local attention while preserving visual-text associations; and (2) Probe-Activated Dynamic FFN, which selectively activates FFN parameters for visual tokens; a greedy search method narrows the space of layer subsets to which these reductions are applied. Contribution/Results: Applying the reductions to roughly half the layers of state-of-the-art MLLMs maintains, and sometimes improves, downstream performance, revealing substantial computational redundancy in current architectures. The method is orthogonal to existing token-compression techniques and can be combined with them for further computational reduction.
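To make the Hollow Attention idea concrete, here is a minimal sketch of the kind of attention mask it implies: visual tokens keep only local visual-to-visual links plus all links to text tokens. The window size, the symmetric banded shape, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hollow_attention_mask(n_tokens, visual_idx, window=2):
    """Boolean attention mask (True = attention allowed, before any
    causal masking). Visual tokens may attend only to visual tokens
    within `window` positions of themselves, but still attend to all
    text tokens; text tokens attend everywhere.

    NOTE: window size and exact mask shape are assumptions for
    illustration of the local-attention idea.
    """
    mask = np.ones((n_tokens, n_tokens), dtype=bool)
    for q in visual_idx:
        for k in visual_idx:
            if abs(q - k) > window:
                mask[q, k] = False  # drop distant visual-visual links
    return mask
```

Such a mask would be passed to the attention kernel in place of the dense mask, so visual-text associations survive while the quadratic visual-visual cost is hollowed out.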
📝 Abstract
Multimodal Large Language Models (MLLMs) are typically based on decoder-only or cross-attention architectures. While decoder-only MLLMs outperform their cross-attention counterparts, they require significantly more computational resources due to extensive self-attention and FFN operations on visual tokens. This raises the question: can we eliminate these expensive operations while maintaining performance? To this end, we present a novel analysis framework to investigate the necessity of these costly operations in decoder-only MLLMs. Our framework introduces two key innovations: (1) Hollow Attention, which limits visual token interactions to local attention while maintaining visual-text associations, and (2) Probe-Activated Dynamic FFN, which selectively activates FFN parameters for visual tokens. Neither method requires fine-tuning, which significantly enhances analysis efficiency. To assess the impact of applying these reductions across different proportions of layers, we developed a greedy search method that significantly narrows the search space. Experiments on state-of-the-art MLLMs reveal that applying our reductions to approximately half of the layers not only maintains but sometimes improves model performance, indicating significant computational redundancy in current architectures. Additionally, our method is orthogonal to existing token compression techniques, allowing for further combination to achieve greater computational reduction. Our findings may provide valuable insights for the design of more efficient future MLLMs. Our code will be publicly available at https://github.com/L-Hugh/Beyond-Token-Compression.
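The Probe-Activated Dynamic FFN described above can be sketched as follows: a probe vector (e.g., a pooled visual hidden state) scores the FFN's intermediate neurons, and only the top-scoring fraction is evaluated for visual tokens. Scoring neurons by the magnitude of the probe's pre-activation, the ReLU nonlinearity, and the `keep_ratio` parameter are all assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def probe_activated_ffn(x_visual, W_in, W_out, probe, keep_ratio=0.5):
    """Run the FFN for visual tokens with only the top-scoring
    intermediate neurons active, as ranked by a probe vector.

    NOTE: using |probe @ W_in| as the per-neuron score and a fixed
    keep_ratio are illustrative assumptions; the probe here is a
    single hidden-state vector such as a pooled visual token.
    """
    scores = np.abs(probe @ W_in)                  # per-neuron score
    k = max(1, int(keep_ratio * W_in.shape[1]))    # neurons to keep
    keep = np.argsort(scores)[-k:]                 # top-k neuron indices
    h = np.maximum(x_visual @ W_in[:, keep], 0.0)  # activate kept columns
    return h @ W_out[keep, :]                      # project back with kept rows
```

Because the gating is computed from a probe rather than learned, this kind of reduction needs no fine-tuning, which is what makes it usable as an analysis tool across many layer subsets.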