🤖 AI Summary
Large Vision-Language Models (LVLMs) face significant deployment challenges due to their substantial computational overhead. To address this, the authors introduce EffiVLM-Bench, a unified benchmark for training-free LVLM acceleration that systematically evaluates mainstream compression techniques, grouped into token compression (e.g., visual token pruning) and parameter compression (e.g., quantization), across diverse model backbones and benchmarks. Beyond absolute performance, the framework also measures generalization and loyalty, and explores Pareto-optimal trade-offs between efficiency and accuracy. Extensive experiments and in-depth analyses yield insights into which acceleration strategies work best, and the code and recipes are open-sourced.
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved remarkable success, yet their significant computational demands hinder practical deployment. While efforts to improve LVLM efficiency are growing, existing methods lack comprehensive evaluation across diverse backbones, benchmarks, and metrics. In this work, we systematically evaluate mainstream acceleration techniques for LVLMs, categorized into token and parameter compression. We introduce EffiVLM-Bench, a unified framework for assessing not only absolute performance but also generalization and loyalty, while exploring Pareto-optimal trade-offs. Our extensive experiments and in-depth analyses offer insights into optimal strategies for accelerating LVLMs. We open-source code and recipes for EffiVLM-Bench to foster future research.
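To make the token-compression family concrete, here is a minimal, hypothetical sketch of importance-based visual token pruning — one representative technique a benchmark like this evaluates. The function names and scoring scheme are illustrative assumptions, not the paper's implementation; real methods typically derive importance from attention weights inside the model.

```python
# Illustrative sketch of importance-based visual token pruning.
# All names here are hypothetical, not taken from the paper's codebase.

def prune_visual_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of tokens by importance score.

    tokens     -- sequence of token embeddings (any objects)
    scores     -- per-token importance, e.g. mean attention received
    keep_ratio -- fraction of tokens to retain, in (0, 1]
    """
    assert len(tokens) == len(scores)
    k = max(1, int(len(tokens) * keep_ratio))
    # Rank token indices by descending importance, keep the top-k,
    # then restore the original sequence order for the kept tokens.
    top = sorted(range(len(tokens)), key=lambda i: -scores[i])[:k]
    return [tokens[i] for i in sorted(top)]

# Example: 6 tokens, keep half -> the 3 highest-scoring survive, in order.
kept = prune_visual_tokens(["a", "b", "c", "d", "e", "f"],
                           [0.9, 0.1, 0.7, 0.2, 0.8, 0.3],
                           keep_ratio=0.5)
# kept == ["a", "c", "e"]
```

Dropping low-importance visual tokens shortens the sequence the language model must attend over, which is where the inference savings of this family of methods come from; the benchmark's role is to quantify how much accuracy, generalization, and loyalty such pruning costs at each retention ratio.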