CAPA: Contribution-Aware Pruning and FFN Approximation for Efficient Large Vision-Language Models

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of large vision-language models during inference, which stems from the high computational cost of processing visual tokens and the absence of reliable criteria for identifying redundancy. To this end, the authors propose the CAPA framework, which introduces attention contribution—defined by integrating attention probabilities with value vector magnitudes—as a novel metric to quantify visual token importance and prune low-contribution tokens. Additionally, CAPA applies linear approximation to the feed-forward networks associated with visual tokens in intermediate layers to eliminate redundant computation. This approach effectively distinguishes functionally heterogeneous attention aggregation points and reveals the linear-behavior redundancy of visual tokens within feed-forward networks. Experiments demonstrate that CAPA significantly improves inference efficiency across multiple benchmarks while preserving strong performance and robustness.
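The summary's core metric, attention contribution, can be sketched in a few lines: weight each key token's attention probability by the norm of its value vector, then aggregate over queries to score visual tokens for pruning. The function names and the top-k selection below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def attention_contribution(attn_probs, values):
    """Hypothetical sketch of the Attention Contribution metric:
    attention probabilities weighted by value-vector magnitudes.

    attn_probs: (num_queries, num_keys) softmax attention probabilities
    values:     (num_keys, head_dim) value vectors
    Returns a per-key-token contribution score.
    """
    value_norms = np.linalg.norm(values, axis=-1)   # ||v_j|| for each key token j
    contrib = attn_probs * value_norms[None, :]     # weight each probability by the norm
    return contrib.sum(axis=0)                      # aggregate over query positions

def prune_low_contribution(scores, keep_ratio=0.5):
    """Keep the indices of the top-`keep_ratio` fraction of tokens by contribution."""
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[::-1][:k]
```

A token that absorbs much attention probability but carries a near-zero value vector (a "Probability Dump" in the paper's terminology) scores low under this metric and would be pruned, while plain attention scores would rank it high.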

📝 Abstract
Efficient inference in Large Vision-Language Models is constrained by the high cost of processing thousands of visual tokens, yet it remains unclear which tokens and computations can be safely removed. While attention scores are commonly used to estimate visual token importance, they are an imperfect proxy for actual contribution. We show that Attention Contribution, which weights attention probabilities by value vector magnitude, provides a more accurate criterion for visual token selection. Our empirical analysis reveals that visual attention sinks are functionally heterogeneous, comprising Probability Dumps with low contribution that can be safely pruned, and Structural Anchors with high contribution essential for maintaining model performance. Further, we identify substantial redundancy in Feed-Forward Networks (FFNs) associated with visual tokens, particularly in intermediate layers where image tokens exhibit linear behavior. Based on these findings, we introduce CAPA (Contribution-Aware Pruning and FFN Approximation), a dual-strategy framework that prunes visual tokens using attention contribution at critical functional transitions and reduces FFN computation through efficient linear approximations. Experiments across various benchmarks and baselines show that CAPA achieves competitive efficiency–performance trade-offs with improved robustness.
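The abstract's second strategy, replacing the FFN computation for visual tokens with a linear approximation in intermediate layers, can be illustrated with a least-squares fit. This is a minimal sketch under assumed interfaces, not CAPA's actual approximation procedure: it fits an affine map A x + b to a given FFN on sample activations.

```python
import numpy as np

def fit_linear_ffn(ffn, sample_inputs):
    """Fit a least-squares affine map A x + b to a (possibly nonlinear) FFN
    on sample activations -- a hypothetical stand-in for the linear
    approximation applied to visual-token FFNs in intermediate layers.

    ffn:           callable mapping (n, d_in) -> (n, d_out)
    sample_inputs: (n, d_in) activations to fit on
    """
    X = np.hstack([sample_inputs, np.ones((len(sample_inputs), 1))])  # append bias column
    Y = ffn(sample_inputs)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solve X @ W ≈ Y in least squares
    A, b = W[:-1], W[-1]                       # split weight matrix and bias
    return A, b
```

At inference, `x @ A + b` replaces the FFN call for visual tokens; when those tokens indeed behave near-linearly through the FFN, the fit error stays small while the two matrix multiplications and nonlinearity of the FFN are reduced to one.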
Problem

Research questions and friction points this paper is trying to address.

visual tokens · efficient inference · Large Vision-Language Models · computational redundancy · token pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Contribution · Visual Token Pruning · FFN Approximation · Linear Redundancy · Vision-Language Models