🤖 AI Summary
To address the high inference overhead of Large Vision-Language Models (LVLMs) and the limitations of existing parameter-dependent (requiring retraining) and token-dependent (unstable selection) pruning methods—which struggle to balance efficiency and performance—this paper proposes PAR, a training-free, self-supervised dynamic pruning framework. Its core innovations are: (1) a meta-router that unifies token-level and layer-level pruning at mixed granularity, enabling adaptive sparsification across both tokens and layers; and (2) a lightweight, plug-and-play architecture with a self-supervised pruning strategy, eliminating the need for fine-tuning while achieving a favorable performance-efficiency trade-off. Evaluated on multiple LVLM benchmarks, PAR achieves up to 2.3× inference speedup and substantial memory reduction while retaining ≥98% of original task performance. The code will be publicly released.
📝 Abstract
Although Large Vision-Language Models (LVLMs) have achieved impressive results, their high computational cost poses a significant barrier to wider application. To enhance inference efficiency, most existing approaches rely on parameter-dependent or token-dependent strategies to reduce computational demands. However, these methods typically require complex training processes and struggle to consistently select the most relevant tokens. In this paper, we systematically analyze these challenges and distill a series of insights for inference acceleration. Based on these findings, we propose a novel framework, the Pruning All-Rounder (PAR). Unlike previous works, PAR employs a meta-router to adaptively organize pruning flows across both tokens and layers. Through self-supervised learning, our method achieves a superior balance between performance and efficiency. Notably, PAR is highly flexible, offering multiple pruning versions to address a range of pruning scenarios. The code for this work will be made publicly available.
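The router-driven, hybrid-granularity pruning described above can be illustrated with a minimal sketch. Note that this is a hypothetical toy illustration, not the paper's actual meta-router: the linear scoring function, the `keep_ratio` and `threshold` values, and the mean-score layer-skip rule are all assumptions introduced here for clarity.

```python
import numpy as np

def router_scores(tokens: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Score each token with a lightweight linear router.
    (Hypothetical stand-in for a learned meta-router.)"""
    logits = tokens @ w                        # one logit per token
    return 1.0 / (1.0 + np.exp(-logits))       # sigmoid keep-probabilities

def prune_tokens(tokens: np.ndarray, scores: np.ndarray,
                 keep_ratio: float) -> np.ndarray:
    """Token-level pruning: keep the top keep_ratio fraction by score."""
    k = max(1, int(round(keep_ratio * len(tokens))))
    keep_idx = np.argsort(scores)[-k:]         # indices of top-k tokens
    return tokens[np.sort(keep_idx)]           # preserve original ordering

def should_skip_layer(scores: np.ndarray, threshold: float = 0.5) -> bool:
    """Layer-level pruning: skip a layer when mean token importance is low
    (an assumed heuristic for illustration only)."""
    return bool(scores.mean() < threshold)

# Toy example: 576 visual tokens of dimension 64, as in common LVLM encoders.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(576, 64))
w = rng.normal(size=64)
s = router_scores(tokens, w)
pruned = prune_tokens(tokens, s, keep_ratio=0.25)
print(pruned.shape)  # (144, 64): 75% of tokens dropped before this layer
```

In a real pipeline, such a router would run before each (or selected) transformer layers, so later layers attend over progressively fewer tokens, and entire layers can be bypassed when their inputs carry little importance.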