🤖 AI Summary
This work addresses the high computational cost of large vision-language models (LVLMs) caused by processing massive numbers of visual tokens. Existing token pruning methods suffer from irreversible removal that induces significant representational distribution shifts, degrading model performance. To overcome this, the authors propose the RCP framework, which integrates cumulative visual token pruning with a FiLM-based late-stage repair adapter. RCP consistently and monotonically reduces tokens across multiple layers while caching pruned information for modulation during answer generation. A dedicated repair loss aligns the first- and second-order statistics of the pruned model with those of the full-token counterpart, effectively mitigating distributional shift. Experiments show that RCP removes up to 88.9% of visual tokens, reduces FLOPs by 85.7%, and incurs only minor accuracy degradation, substantially outperforming existing no-finetuning pruning approaches across multiple LVLM benchmarks.
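The cumulative, monotonic token reduction described above can be sketched as a keep-mask that only ever shrinks from layer to layer: tokens dropped at an earlier pruning layer stay dropped, and each layer selects its survivors only from the previous layer's survivors. The following is a minimal illustrative sketch (the scoring function, layer placement, and keep ratios here are assumptions, not the paper's exact design):

```python
import numpy as np

def cumulative_prune(scores_per_layer, keep_ratios):
    """Monotonically shrink the set of kept visual tokens across layers.

    scores_per_layer: list of (num_tokens,) arrays of per-token importance
        scores at each pruning layer (e.g. attention-derived; illustrative).
    keep_ratios: list of target keep fractions, one per pruning layer.
    Returns the boolean keep-mask after each layer; each mask is a subset
    of the previous one, so removal is cumulative and consistent.
    """
    num_tokens = len(scores_per_layer[0])
    keep = np.ones(num_tokens, dtype=bool)
    masks = []
    for scores, ratio in zip(scores_per_layer, keep_ratios):
        k = max(1, int(round(ratio * num_tokens)))
        # Score only tokens that survived so far; dropped tokens stay dropped.
        masked_scores = np.where(keep, scores, -np.inf)
        top_k = np.argsort(masked_scores)[-k:]
        new_keep = np.zeros(num_tokens, dtype=bool)
        new_keep[top_k] = True
        keep = keep & new_keep  # cumulative: can only remove, never re-add
        masks.append(keep.copy())
    return masks
```

Because the mask is intersected with its predecessor at every layer, the kept set decreases monotonically, which is what allows surviving tokens to be physically discarded at inference rather than merely masked.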
📝 Abstract
Large Vision-Language Models (LVLMs) suffer from prohibitive inference costs due to the massive number of visual tokens processed by the language decoder. Existing pruning methods often lead to significant performance degradation because the irreversible removal of visual tokens causes a distribution shift in the hidden states that deviates from the pre-trained full-token regime. To address this, we propose the Representation Consistency Pruner (RCP), a novel framework that integrates cumulative visual token pruning with a delayed repair mechanism. Specifically, we introduce a cross-attention pruner that leverages the intrinsic attention of the LLM to predict cumulative masks, ensuring consistent and monotonic token reduction across layers. To compensate for the resulting information loss, we design a Delayed Repair Adapter (DRA), which caches the essence of pruned tokens and applies FiLM-based modulation specifically to the answer-generation tokens. We employ a repair loss to match the first- and second-order statistics of the pruned representations with those of a full-token teacher. RCP is highly efficient because it trains only lightweight plug-in modules while allowing physical token discarding at inference. Extensive experiments on LVLM benchmarks demonstrate that RCP removes up to 88.9% of visual tokens and reduces FLOPs by up to 85.7% with only a marginal average accuracy drop, outperforming prior methods that avoid fine-tuning the original model on several widely used benchmarks.
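The two repair components in the abstract can be sketched concretely: FiLM-based modulation applies a per-channel affine transform (scale and shift conditioned on a summary of the cached pruned tokens) to the answer-generation states, and the repair loss matches first-order (mean) and second-order (covariance) statistics against the full-token teacher. This is a minimal numpy sketch under assumed parameterizations; the pooling, projection weights, and loss weighting are illustrative, not the paper's exact design:

```python
import numpy as np

def film_modulate(answer_hidden, pruned_cache, W_gamma, b_gamma, W_beta, b_beta):
    """FiLM-style repair: a pooled summary of the cached pruned tokens
    conditions a per-channel affine transform applied only to the
    answer-generation hidden states.

    answer_hidden: (T, d) hidden states of the answer tokens.
    pruned_cache:  (P, d) cached states of the pruned visual tokens.
    W_*, b_*: learned adapter projections (hypothetical names/shapes).
    """
    summary = pruned_cache.mean(axis=0)       # (d,) pooled pruned information
    gamma = summary @ W_gamma + b_gamma       # per-channel scale
    beta = summary @ W_beta + b_beta          # per-channel shift
    return gamma * answer_hidden + beta       # FiLM: y = gamma * x + beta

def repair_loss(pruned_repr, full_repr):
    """Match first- and second-order statistics (mean and covariance) of the
    pruned model's representations to those of the full-token teacher."""
    mu_p, mu_f = pruned_repr.mean(axis=0), full_repr.mean(axis=0)
    cov_p = np.cov(pruned_repr, rowvar=False)
    cov_f = np.cov(full_repr, rowvar=False)
    first = np.sum((mu_p - mu_f) ** 2)        # mean alignment
    second = np.sum((cov_p - cov_f) ** 2)     # squared Frobenius distance
    return first + second
```

The loss is zero exactly when the pruned representations reproduce the teacher's mean and covariance, which is the sense in which the distributional shift induced by pruning is mitigated.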