Pruning All-Rounder: Rethinking and Improving Inference Efficiency for Large Vision Language Models

📅 2024-12-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high inference overhead of Large Vision-Language Models (LVLMs) and the limitations of existing parameter-dependent (requiring retraining) and token-dependent (unstable token selection) pruning methods—which struggle to balance efficiency and performance—this paper proposes PAR, a training-free, self-supervised dynamic pruning framework. Its core innovations include: (1) a meta-router that unifies token-level and layer-level pruning into a single hybrid-granularity scheme, enabling adaptive sparsification across both tokens and layers; and (2) a lightweight, plug-and-play architecture with a self-supervised pruning strategy, eliminating the need for fine-tuning while maintaining a favorable performance-efficiency balance. Evaluated on multiple LVLM benchmarks, PAR achieves up to 2.3× inference speedup and substantial memory reduction while retaining ≥98% of original task performance. The code will be publicly released.
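To illustrate the general idea of router-based token pruning described above, here is a minimal, hypothetical sketch. The random linear probe standing in for the router, the scoring rule, and the keep ratio are all placeholders for illustration only; PAR's actual meta-router and its hybrid token/layer pruning flow are defined in the paper itself.

```python
import numpy as np

def route_and_prune(tokens, keep_ratio=0.5, rng=None):
    """Toy token-level pruning: score each visual token with a
    lightweight stand-in 'router' (a random linear probe, NOT
    PAR's meta-router) and keep the top-scoring fraction,
    preserving the original token order."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = tokens.shape
    w = rng.normal(size=d)            # placeholder router weights
    scores = tokens @ w               # one relevance score per token
    k = max(1, int(n * keep_ratio))   # number of tokens to keep
    keep = np.argsort(scores)[-k:]    # indices of the top-k tokens
    return tokens[np.sort(keep)]      # restore original ordering

# Example: prune 576 visual tokens (a common LVLM count) to half
tokens = np.random.default_rng(1).normal(size=(576, 64))
pruned = route_and_prune(tokens, keep_ratio=0.5)
print(pruned.shape)  # (288, 64)
```

In a real system the router would be a small learned module scoring tokens from hidden states, and a layer-level router would additionally decide which transformer layers to skip; this sketch only shows the token-selection step.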

📝 Abstract
Although Large Vision-Language Models (LVLMs) have achieved impressive results, their high computational cost poses a significant barrier to wider application. To enhance inference efficiency, most existing approaches depend on parameter-dependent or token-dependent strategies to reduce computational demands. However, these methods typically require complex training processes and struggle to consistently select the most relevant tokens. In this paper, we systematically analyze the above challenges and provide a series of valuable insights for inference acceleration. Based on these findings, we propose a novel framework, the Pruning All-Rounder (PAR). Different from previous works, PAR develops a meta-router to adaptively organize pruning flows across both tokens and layers. Through self-supervised learning, our method achieves a superior balance between performance and efficiency. Notably, PAR is highly flexible, offering multiple pruning versions to address a range of pruning scenarios. The code for this work will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in Large Vision-Language Models
Improving token and layer pruning efficiency adaptively
Balancing performance and inference speed without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-router organizes pruning flows adaptively
Self-supervised learning balances performance and efficiency
Multiple pruning versions for various scenarios
Wei Suo
School of Computer Science and Ningbo Institute, Northwestern Polytechnical University
Ji Ma
School of Computer Science and Ningbo Institute, Northwestern Polytechnical University
Mengyang Sun
Northwestern Polytechnical University
Computer Vision, Vision-Language Interaction
Lin Yuanbo Wu
Swansea University
Computer Vision, AI Generation, Trustworthy AI, Autonomous Systems, Embodied Visual Intelligence
Peng Wang
School of Computer Science and Ningbo Institute, Northwestern Polytechnical University
Yanning Zhang
Northwestern Polytechnical University
Computer Vision