Collaborative Multi-Mode Pruning for Vision-Language Models

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language model compression methods typically rely on a single pruning strategy, either parameter-level or token-level, and struggle to maintain performance under high compression ratios. This work proposes CoMP, a Collaborative Multi-Mode Pruning framework that, for the first time, jointly optimizes parameter and token pruning. Its key innovations are a Collaborative Importance Metric (CIM) that explicitly models the interdependencies between parameters and tokens, and a Multi-Mode Pruning Strategy (MPS) that combines historical cost information with stochastic exploration to escape local optima. Extensive experiments show that CoMP consistently outperforms state-of-the-art methods across diverse vision-language tasks and architectures, even at high compression rates.
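To make the CIM idea concrete, below is a minimal PyTorch sketch of how such a collaborative metric could couple the two importance scores. This is not the authors' implementation: the Wanda-style |W|·‖X‖ base score, the attention-based token score, and every name here are illustrative assumptions; the official code is in the linked repository.

```python
import torch
from typing import Optional

def parameter_importance(weight: torch.Tensor,
                         activations: torch.Tensor,
                         token_scores: torch.Tensor) -> torch.Tensor:
    """Score each weight, up-weighting features driven by important tokens.

    weight:       (out_dim, in_dim) linear-layer weights
    activations:  (num_tokens, in_dim) calibration inputs to this layer
    token_scores: (num_tokens,) per-token importance in [0, 1]
    """
    # Token-weighted activation norm per input feature: a Wanda-style
    # |W| * ||X|| score, with each token's row scaled by its importance.
    weighted_acts = activations * token_scores.unsqueeze(-1)   # (T, in_dim)
    feat_norm = weighted_acts.norm(p=2, dim=0)                 # (in_dim,)
    return weight.abs() * feat_norm                            # (out_dim, in_dim)

def token_importance(attn: torch.Tensor,
                     head_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Score tokens by received attention, ignoring pruned attention heads.

    attn:      (num_heads, num_tokens, num_tokens) attention weights
    head_mask: optional (num_heads,) 0/1 mask marking surviving heads, so
               that already-pruned parameters no longer distort token scores.
    """
    if head_mask is not None:
        attn = attn * head_mask.view(-1, 1, 1)
    # Mean attention each token receives, across heads and query positions.
    return attn.mean(dim=(0, 1))                               # (num_tokens,)
```

The two directions of coupling mirror the summary: token scores flow into the parameter score, and the parameter (head) mask flows back into the token score.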
📝 Abstract
Vision-Language Models (VLMs) have advanced rapidly within the unified Transformer architecture, yet their deployment on resource-constrained devices remains challenging due to high computational complexity. While pruning has emerged as an effective technique for compressing VLMs, existing approaches predominantly focus on a single mode, pruning either parameters or tokens, and fail to fully explore the inherent redundancy in each mode, which leads to substantial performance degradation at high pruning ratios. To address these limitations, we propose Collaborative Multi-Mode Pruning (CoMP), a novel framework tailored for VLMs that performs joint parameter and token pruning. Specifically, we first design a Collaborative Importance Metric (CIM) that captures the mutual interference between the coupled parameters and tokens: it incorporates the distinct significance of tokens into the computation of parameter importance scores, while simultaneously mitigating the effect of pruned parameters on token importance scores. Moreover, we develop a Multi-Mode Pruning Strategy (MPS) that decomposes the overall pruning process into a sequence of stages; at each stage, it estimates the priority of the different pruning modes based on their pruning costs and adaptively shifts to the optimal one. Additionally, MPS integrates historical cost with random exploration to stabilize the pruning process and avoid local optima. Extensive experiments across various vision-language tasks and models demonstrate that our method substantially improves performance under high pruning ratios compared to state-of-the-art approaches. The source code is available at https://github.com/Wuzimeng/CoMP.git.
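The MPS description above amounts to a stage-wise controller that, at every step, picks the pruning mode with the lowest estimated cost, smoothed over history and perturbed by occasional random exploration. A minimal sketch of such a controller follows; the epsilon-greedy exploration, the exponential smoothing factor, and the callable interfaces are assumptions for illustration, not the paper's exact scheduling.

```python
import random

def multi_stage_prune(model, num_stages: int, step: float,
                      estimate_cost, prune,
                      alpha: float = 0.7, epsilon: float = 0.1):
    """Stage-wise pruning: at each stage, apply one step of the cheapest mode.

    estimate_cost(model, mode, step) -> float  # e.g. calibration-loss increase
    prune(model, mode, step)                   # remove `step` ratio in `mode`
    """
    modes = ("parameter", "token")
    history = {m: 0.0 for m in modes}  # smoothed historical cost per mode

    for _ in range(num_stages):
        for m in modes:
            # Blend the current estimate with history for a stable signal.
            history[m] = alpha * history[m] + (1 - alpha) * estimate_cost(model, m, step)
        if random.random() < epsilon:
            chosen = random.choice(modes)         # random exploration
        else:
            chosen = min(modes, key=history.get)  # exploit the cheapest mode
        prune(model, chosen, step)
    return model
```

The epsilon term plays the role of the paper's stochastic exploration, keeping the controller from locking onto a single mode early, while the smoothed history damps noisy cost estimates across stages.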
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Model Pruning
Parameter Pruning
Token Pruning
Computational Complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Multi-Mode Pruning
Vision-Language Models
Model Compression
Pruning Strategy
Importance Metric
Zimeng Wu
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; School of Computer Science and Engineering, Beihang University, Beijing, China
Yunhong Wang
Professor, School of Computer Science and Engineering, Beihang University
Biometrics, Pattern Recognition, Image Processing, Computer Vision
Donghao Wang
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; School of Computer Science and Engineering, Beihang University, Beijing, China
Jiaxin Chen
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; School of Computer Science and Engineering, Beihang University, Beijing, China