🤖 AI Summary
Deploying large vision-language models (VLMs) on resource-constrained devices is hampered by high inference overhead, and existing pruning methods rely on access to the original training data. This paper proposes a data-free, automated pruning framework for VLMs. Methodologically, it introduces (1) a generalization gap model grounded in structural risk minimization, enabling an efficient, universal pruning-policy search with only 64 calibration samples; and (2) dynamic co-evolution of the visual projector to raise the performance upper bound of the pruned architecture. On ScienceQA, the pruned model achieves 83.05% accuracy with a 1.8× speedup over the dense LLaVA-v1.5-7B model, while maintaining strong generalization across VizWiz, MM-Vet, and LLaVA-Bench. The framework eliminates dependency on the original training corpus, enables sample-efficient adaptation, and preserves multimodal reasoning capability under significant parameter reduction.
📝 Abstract
While multimodal large language models demonstrate strong performance on complex reasoning tasks, their model complexity poses significant deployment challenges, especially on resource-limited devices. In this paper, we propose an automatic pruning method for large vision-language models to enhance the efficiency of multimodal reasoning. Conventional methods rely on the training data of the original model to select the proper pruning ratio for different network components. However, these methods are impractical for large vision-language models due to the unaffordable search costs caused by web-scale training corpora. In contrast, our approach leverages only a small number of samples to search for the desired pruning policy, maximizing its generalization ability on unknown training data while maintaining model accuracy, which achieves an optimal trade-off between accuracy and efficiency for large vision-language models. Specifically, we formulate the generalization gap of the pruning strategy using the structural risk minimization principle. Based on both task performance and generalization capability, we iteratively search for the optimal pruning policy within a given search space and optimize the vision projector to evolve the search space toward a higher performance upper bound. We conduct extensive experiments on the ScienceQA, VizWiz, MM-Vet, and LLaVA-Bench datasets for the task of visual question answering. Using only 64 samples for pruning policy search, EfficientLLaVA achieves an accuracy of 83.05% on ScienceQA, along with a $\times 1.8$ speedup compared to the dense LLaVA-v1.5-7B model.
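The iterative pruning-policy search described above can be sketched in miniature. The toy below is purely illustrative and not the paper's implementation: `task_score` stands in for accuracy measured on the small calibration set, `srm_penalty` is a hypothetical capacity-based surrogate for the structural-risk-minimization generalization gap, and the search is a simple greedy random mutation over per-layer pruning ratios. All function names, constants, and the search strategy are assumptions for exposition.

```python
import random

random.seed(0)

NUM_LAYERS = 4          # toy model depth (hypothetical)
TARGET_SPARSITY = 0.5   # initial per-layer pruning ratio (hypothetical)

def task_score(policy):
    # Stand-in for accuracy on the small calibration set:
    # pruning more hurts, and very uneven pruning hurts slightly more.
    avg = sum(policy) / len(policy)
    unevenness = max(policy) - min(policy)
    return 1.0 - 0.8 * avg - 0.1 * unevenness

def srm_penalty(policy, c=0.05):
    # SRM-style generalization-gap surrogate: a richer (less pruned)
    # hypothesis class receives a larger capacity penalty.
    capacity = sum(1.0 - r for r in policy)
    return c * capacity

def objective(policy):
    # Trade off calibration-set performance against the estimated
    # generalization gap, as in the paper's search criterion.
    return task_score(policy) - srm_penalty(policy)

def search(iterations=200):
    best = [TARGET_SPARSITY] * NUM_LAYERS
    best_val = objective(best)
    for _ in range(iterations):
        # Mutate one layer's pruning ratio; keep the candidate only if
        # it improves the SRM-regularized objective (greedy search).
        cand = best[:]
        i = random.randrange(NUM_LAYERS)
        cand[i] = min(0.9, max(0.1, cand[i] + random.uniform(-0.1, 0.1)))
        val = objective(cand)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val
```

In the actual method, the mutation loop would score candidates on the 64 calibration samples and alternate with vision-projector updates that evolve the search space; here the greedy loop only shows the shape of the accuracy-versus-generalization trade-off.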