RCP: Representation Consistency Pruner for Mitigating Distribution Shift in Large Vision-Language Models

📅 2026-04-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost of large vision-language models (LVLMs) caused by processing massive numbers of visual tokens. Existing token pruning methods suffer from irreversible removal that induces significant representational distribution shifts, degrading model performance. To overcome this, the authors propose the RCP framework, which integrates cumulative visual token pruning with a FiLM-based late-stage repair adapter. RCP consistently and monotonically reduces tokens across multiple layers while caching pruned information for modulation during answer generation. A dedicated repair loss aligns the first- and second-order statistics of the pruned model with those of the full-token counterpart, effectively mitigating distributional shift. Experiments show that RCP removes up to 88.9% of visual tokens, reduces FLOPs by 85.7%, and incurs only minor accuracy degradation, substantially outperforming existing no-finetuning pruning approaches across multiple LVLM benchmarks.
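The cumulative, monotonic token reduction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `cumulative_prune`, its score inputs, and the keep-ratio schedule are all hypothetical names, and the attention-derived importance scores are assumed to be given.

```python
import numpy as np

def cumulative_prune(scores_per_layer, keep_ratios):
    """Select a monotonically shrinking set of visual tokens.

    scores_per_layer: list of (n_tokens,) arrays of attention-derived
    importance scores, one per pruning layer (assumed inputs).
    keep_ratios: decreasing fractions of tokens to keep at each layer.
    Returns one boolean keep-mask per layer; each mask is a subset of
    the previous one, mirroring the cumulative masking idea.
    """
    n = len(scores_per_layer[0])
    alive = np.ones(n, dtype=bool)
    masks = []
    for scores, ratio in zip(scores_per_layer, keep_ratios):
        k = max(1, int(round(ratio * n)))
        # Rank only tokens that are still alive; pruned tokens stay pruned.
        masked_scores = np.where(alive, scores, -np.inf)
        keep_idx = np.argsort(masked_scores)[-k:]
        new_mask = np.zeros(n, dtype=bool)
        new_mask[keep_idx] = True
        new_mask &= alive  # enforce monotonic (cumulative) reduction
        alive = new_mask
        masks.append(new_mask)
    return masks
```

Because each layer re-ranks only the surviving tokens, the kept set can never regrow, which is the "consistent and monotonic" property the summary refers to.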
📝 Abstract
Large Vision-Language Models (LVLMs) suffer from prohibitive inference costs due to the massive number of visual tokens processed by the language decoder. Existing pruning methods often lead to significant performance degradation because the irreversible removal of visual tokens causes a distribution shift in the hidden states, which deviate from the pre-trained full-token regime. To address this, we propose the Representation Consistency Pruner (RCP), a novel framework that integrates cumulative visual token pruning with a delayed repair mechanism. Specifically, we introduce a cross-attention pruner that leverages the LLM's intrinsic attention as a baseline to predict cumulative masks, ensuring consistent and monotonic token reduction across layers. To compensate for the resulting information loss, we design a delayed repair adapter (DRA) that caches the essence of pruned tokens and applies FiLM-based modulation specifically to the answer-generation tokens. We employ a repair loss to match the first- and second-order statistics of the pruned representations with those of a full-token teacher. RCP is highly efficient because it trains only lightweight plug-in modules while allowing tokens to be physically discarded at inference. Extensive experiments on LVLM benchmarks demonstrate that RCP removes up to 88.9% of visual tokens and reduces FLOPs by up to 85.7% with only a marginal average accuracy drop, outperforming prior fine-tuning-free methods on several widely used benchmarks.
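The two repair ingredients named in the abstract, FiLM-based modulation and a loss matching first- and second-order statistics, can be sketched minimally as below. This is an illustrative simplification under stated assumptions: the function names are hypothetical, `gamma`/`beta` are assumed to come from the repair adapter conditioned on cached pruned-token information, and per-feature mean/variance matching stands in for whatever exact statistic the paper uses.

```python
import numpy as np

def film(h, gamma, beta):
    """Feature-wise Linear Modulation (FiLM): scale and shift hidden
    states. In RCP-like usage, gamma and beta would be predicted by the
    repair adapter from cached pruned-token information (assumption)."""
    return gamma * h + beta

def repair_loss(h_pruned, h_full):
    """Match first- and second-order statistics of the pruned model's
    hidden states to the full-token teacher's: squared error between
    per-feature means and between per-feature variances (a simplified
    stand-in for the paper's exact repair loss)."""
    mu_p, mu_f = h_pruned.mean(axis=0), h_full.mean(axis=0)
    var_p, var_f = h_pruned.var(axis=0), h_full.var(axis=0)
    return np.mean((mu_p - mu_f) ** 2) + np.mean((var_p - var_f) ** 2)
```

The loss is zero exactly when the pruned representations already share the teacher's per-feature means and variances, which is the distribution-alignment goal the abstract describes.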
Problem

Research questions and friction points this paper is trying to address.

distribution shift
visual token pruning
large vision-language models
inference cost
representation consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representation Consistency
Visual Token Pruning
Distribution Shift Mitigation
Delayed Repair Adapter
FiLM-based Modulation
Jianwei Zhang
Professor, School of Education, University at Albany, SUNY
CSCL, learning sciences, technology for creativity, knowledge building, inquiry-based learning
Chaoning Zhang
Professor at UESTC (University of Electronic Science and Technology of China)
Computer Vision, LLM and VLM, GenAI and AIGC Detection
Sihan Cao
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006 Xiyuan Avenue, High-Tech Zone (West), Chengdu 611731, China
Wang Liu
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006 Xiyuan Avenue, High-Tech Zone (West), Chengdu 611731, China
Pengcheng Zheng
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006 Xiyuan Avenue, High-Tech Zone (West), Chengdu 611731, China
Jiaxin Huang
MBZUAI
Machine Learning, Medical Image Analysis, 3D Vision
Caiyan Qin
School of Robotics and Advanced Manufacture, Harbin Institute of Technology, Shenzhen, University Town of Shenzhen, Nanshan District, Shenzhen 518055, China
Yalan Ye
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006 Xiyuan Avenue, High-Tech Zone (West), Chengdu 611731, China
Wei Dong
PhD candidate, School of Computer Science and Engineering, Northwestern Polytechnical University
Deep Learning
Yang Yang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006 Xiyuan Avenue, High-Tech Zone (West), Chengdu 611731, China