Mostly Text, Smart Visuals: Asymmetric Text-Visual Pruning for Large Vision-Language Models

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing pruning methods for large vision-language models fail to account for the differing sensitivity and redundancy of the textual and visual modalities, which limits their effectiveness. This work proposes Asymmetric Text-Visual Weight Pruning (ATV-Pruning), which calibrates the two modalities separately: it exploits the high sensitivity of the textual pathway by retaining all of its tokens, and the high redundancy of the visual pathway through a hierarchical token-selection strategy. The method introduces an adaptive calibration pool and layer-adaptive importance metrics to enable modality-aware, precise pruning. Experiments show that ATV-Pruning significantly outperforms existing approaches across multiple multimodal benchmarks, sustaining up to 50% sparsity in the visual pathway while maintaining competitive performance.

📝 Abstract
Network pruning is an effective technique for building lightweight Large Vision-Language Models (LVLMs), and existing methods typically incorporate both weights and activations into the importance metric. However, they process calibration data from different modalities in a unified manner, overlooking modality-specific behaviors. This raises a critical challenge: how to address the divergent behaviors of textual and visual tokens for accurate pruning of LVLMs. To this end, we systematically investigate the sensitivity of visual and textual tokens to pruning by decoupling their corresponding weights, revealing that: (i) the textual pathway should be calibrated with text tokens, since it is more sensitive than the visual pathway; and (ii) the visual pathway is highly redundant, permitting sparsity as high as 50%. Motivated by these insights, we propose a simple yet effective Asymmetric Text-Visual Weight Pruning method for LVLMs, dubbed ATV-Pruning, which builds the importance metric for accurate weight pruning by selecting informative tokens from both the textual and visual pathways. Specifically, ATV-Pruning integrates two key innovations: first, a calibration pool is adaptively constructed from all textual tokens and a subset of visual tokens; second, a layer-adaptive selection strategy identifies the important visual tokens. Finally, extensive experiments on standard multimodal benchmarks verify the superiority of ATV-Pruning over state-of-the-art methods.
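To make the asymmetric calibration idea concrete, here is a minimal sketch of the workflow the abstract describes. It uses a Wanda-style score (|W| times the activation norm of each input channel) as a stand-in for the paper's importance metric, and a token L2-norm as a saliency proxy for selecting visual tokens; the function names, the saliency proxy, and the `keep_ratio` knob are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def token_importance(acts):
    # Hypothetical saliency proxy: L2 norm of each token's activation vector.
    return np.linalg.norm(acts, axis=1)

def build_calibration_pool(text_acts, vis_acts, keep_ratio=0.5):
    # Asymmetric pool: keep ALL text tokens (high sensitivity),
    # but only the top-k visual tokens (high redundancy).
    # keep_ratio stands in for the paper's layer-adaptive selection.
    k = max(1, int(keep_ratio * vis_acts.shape[0]))
    top = np.argsort(-token_importance(vis_acts))[:k]
    return np.vstack([text_acts, vis_acts[top]])

def prune_weights(W, X, sparsity=0.5):
    # Wanda-style metric: score[i, j] = |W[i, j]| * ||X[:, j]||_2,
    # then zero the lowest-scoring fraction of weights in each output row.
    col_norm = np.linalg.norm(X, axis=0)           # per-input-channel norm
    score = np.abs(W) * col_norm[None, :]
    k = int(sparsity * W.shape[1])
    mask = np.ones_like(W, dtype=bool)
    lowest = np.argsort(score, axis=1)[:, :k]      # indices to prune per row
    np.put_along_axis(mask, lowest, False, axis=1)
    return W * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))          # one linear layer's weights
text = rng.normal(size=(32, 16))      # 32 text-token activations
vis = rng.normal(size=(64, 16))       # 64 visual-token activations
X = build_calibration_pool(text, vis, keep_ratio=0.5)
W_pruned = prune_weights(W, X, sparsity=0.5)
```

In this toy run the pool contains all 32 text tokens plus the 32 highest-norm visual tokens, and exactly half of each weight row is zeroed; the real method replaces the fixed `keep_ratio` with a per-layer adaptive choice.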
Problem

Research questions and friction points this paper is trying to address.

Large Vision-Language Models
Network Pruning
Modality-specific Behavior
Text-Visual Tokens
Model Compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymmetric Pruning
Vision-Language Models
Modality-Specific Sensitivity
Weight Pruning
Token Selection