TAMP: Token-Adaptive Layerwise Pruning in Multimodal Large Language Models

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing pruning methods for multimodal large language models (MLLMs) perform poorly because they fail to model the heterogeneity of tokens across modalities and layers. To address this, we propose TAMP, a token-adaptive layerwise pruning framework built on unstructured weight pruning. It introduces two key components: (1) a diversity-aware sparsity allocation mechanism that adjusts each layer's pruning ratio according to modality- and layer-specific token characteristics; and (2) an attention-score-based strategy that identifies representative multimodal input tokens and uses their activations to guide pruning. Evaluated on LLaVA-NeXT and VideoLLaMA2, TAMP consistently outperforms state-of-the-art pruning baselines, maintaining or exceeding their accuracy at matched sparsity across multiple multimodal benchmarks, including MMBench and VideoMME, and offering a systematic approach to joint modality-layer sparsity allocation in MLLMs.

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown remarkable versatility in understanding diverse multimodal data and tasks. However, these capabilities come with an increased model scale. While post-training pruning reduces model size in unimodal models, its application to MLLMs often yields limited success. Our analysis discovers that conventional methods fail to account for the unique token attributes across layers and modalities inherent to MLLMs. Inspired by this observation, we propose TAMP, a simple yet effective pruning framework tailored for MLLMs, featuring two key components: (1) Diversity-Aware Sparsity, which adjusts sparsity ratio per layer based on diversities among multimodal output tokens, preserving more parameters in high-diversity layers; and (2) Adaptive Multimodal Input Activation, which identifies representative multimodal input tokens using attention scores to guide unstructured weight pruning. We validate our method on two state-of-the-art MLLMs: LLaVA-NeXT, designed for vision-language tasks, and VideoLLaMA2, capable of processing audio, visual, and language modalities. Empirical experiments across various multimodal evaluation benchmarks demonstrate that each component of our approach substantially outperforms existing pruning techniques.
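The first component, Diversity-Aware Sparsity, can be illustrated with a minimal sketch. The abstract does not specify the diversity metric, so mean pairwise cosine distance among a layer's output tokens is assumed here as a stand-in, and the function names (`token_diversity`, `allocate_sparsity`) and the `spread` parameter are hypothetical; the paper's actual allocation rule may differ.

```python
# Hypothetical sketch of diversity-aware layerwise sparsity allocation.
# Assumption: "diversity" = mean pairwise cosine distance among a layer's
# output tokens; high-diversity layers get a lower pruning ratio while the
# average ratio stays at the global target.
import numpy as np

def token_diversity(tokens):
    """Mean pairwise cosine distance among one layer's output tokens."""
    x = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    sim = x @ x.T                                   # cosine similarity matrix
    n = len(x)
    off_diag = (sim.sum() - np.trace(sim)) / (n * (n - 1))
    return 1.0 - off_diag                           # higher = more diverse

def allocate_sparsity(layer_tokens, target_sparsity=0.5, spread=0.2):
    """Assign each layer a pruning ratio; diverse layers keep more weights."""
    div = np.array([token_diversity(t) for t in layer_tokens])
    z = (div - div.mean()) / (div.std() + 1e-8)     # normalize across layers
    ratios = target_sparsity - spread * z           # more diverse -> denser
    return np.clip(ratios, 0.0, 1.0)
```

For example, a layer whose multimodal output tokens are nearly identical would receive a ratio above the target, while a layer with highly varied tokens would be pruned less aggressively.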
Problem

Research questions and friction points this paper is trying to address.

Pruning MLLMs while preserving multimodal performance
Addressing token diversity across layers and modalities
Adaptive pruning for vision-language and audio-visual models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-adaptive layerwise pruning for MLLMs
Diversity-aware sparsity adjusts layer sparsity
Adaptive multimodal input activation guides pruning
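The second component, Adaptive Multimodal Input Activation, can be sketched as follows. The sketch assumes a Wanda-style importance metric (|weight| x input-activation norm) as the underlying unstructured pruning criterion, restricted to the top-attended input tokens; the function names and the `k` parameter are hypothetical and the paper's exact scoring rule may differ.

```python
# Hypothetical sketch: select representative multimodal input tokens by
# attention score, then use only their activations in a Wanda-style
# |weight| * activation-norm metric for unstructured pruning.
import numpy as np

def representative_tokens(hidden, attn, k=8):
    """Keep the k input tokens that receive the most attention mass."""
    scores = attn.sum(axis=0)                 # attention received per token
    top = np.argsort(scores)[-k:]
    return hidden[top]                        # (k, d_in)

def prune_weights(W, hidden, attn, sparsity=0.5, k=8):
    """Zero the lowest-importance entries of W (shape: d_out x d_in)."""
    reps = representative_tokens(hidden, attn, k)
    act_norm = np.linalg.norm(reps, axis=0)   # per-input-feature norm
    importance = np.abs(W) * act_norm[None, :]
    thresh = np.quantile(importance, sparsity)
    return np.where(importance >= thresh, W, 0.0)
```

Restricting the activation statistics to attention-selected tokens is what makes the criterion "token-adaptive": padding-like or redundant tokens no longer dilute the per-feature norms that drive the pruning decision.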
Jaewoo Lee
University of North Carolina Chapel Hill
Keyang Xuan
University of Illinois at Urbana–Champaign
C. Ekbote
Massachusetts Institute of Technology
Sandeep Polisetty
Student, University of Massachusetts, Amherst
Yi R. Fung
Hong Kong University of Science and Technology
Paul Pu Liang
Massachusetts Institute of Technology