🤖 AI Summary
This study systematically investigates the impact of KV cache and weight compression on multimodal generation performance in Large Vision-Language Models (LVLMs). Addressing the lack of comprehensive evaluation, we introduce the first multidimensional benchmark for LVLM compression effects, covering generation quality, ethical attributes (bias, hallucination, toxicity, visual illusion detection), and cross-modal capabilities (recognition, reasoning, spatial awareness). We propose a novel dual-path compression impact modeling framework that integrates real and synthetic data to enable cross-sociodemographic attribute analysis. Across four LLaVA variants, we evaluate ten diverse datasets and metrics under uniform, outlier-suppressing, and grouped quantization. Our analysis reveals systematic trade-offs between quantization budget and performance degradation: certain compression schemes significantly exacerbate hallucination and bias, while specific lightweight configurations outperform the FP16 baseline on select tasks. The code is publicly released.
📝 Abstract
Despite recent efforts to understand the impact of compression on large language models (LLMs) in terms of their downstream task performance and trustworthiness on relatively simple uni-modal benchmarks (for example, question answering and commonsense reasoning), a detailed study on multi-modal Large Vision-Language Models (LVLMs) is still missing. To mitigate this gap, we present LVLM-Compress-Bench, a framework to thoroughly study the broad impact of compression on the generative performance of LVLMs with multi-modal input-driven tasks. Specifically, we consider two major classes of compression for autoregressive models, namely KV cache and weight compression, targeting the dynamically growing intermediate cache and the static weights, respectively. We use four LVLM variants of the popular LLaVA framework to present our analysis, integrating various state-of-the-art KV and weight compression methods, including uniform, outlier-reduced, and group quantization, for the KV cache and weights. With this framework we demonstrate results on ten different multi-modal datasets covering a range of capabilities, including recognition, knowledge, language generation, spatial awareness, visual reasoning, hallucination and visual illusion identification, toxicity, stereotypes, and bias. Specifically, our framework demonstrates the compression impact on both general and ethically critical metrics, leveraging a combination of real-world and synthetic datasets to encompass diverse societal intersectional attributes. Extensive experimental evaluations yield diverse and intriguing observations on the behavior of LVLMs at different quantization budgets for the KV cache and weights, showing both maintained and degraded performance relative to the baseline model in FP16 data format. Code will be open-sourced at https://github.com/opengear-project/LVLM-compress-bench.
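To make the quantization classes named above concrete, here is a minimal, hypothetical sketch of per-group uniform (asymmetric) quantization, the idea behind group quantization of weights or KV cache tensors: each contiguous group of values gets its own scale and zero-point, so outliers in one group do not inflate the quantization range of another. The function names, the 4-bit budget, and the group size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def group_quantize(x, bits=4, group_size=64):
    """Per-group asymmetric uniform quantization of a flat float tensor.

    Illustrative sketch: splits x into contiguous groups and stores
    integer codes plus a per-group (scale, min) pair for dequantization.
    """
    levels = 2 ** bits - 1                      # e.g. 15 codes for 4-bit
    x = x.reshape(-1, group_size)               # one row per group
    mins = x.min(axis=1, keepdims=True)
    maxs = x.max(axis=1, keepdims=True)
    # Guard against constant groups where max == min.
    scales = np.where(maxs > mins, (maxs - mins) / levels, 1.0)
    q = np.clip(np.round((x - mins) / scales), 0, levels).astype(np.uint8)
    return q, scales, mins

def group_dequantize(q, scales, mins):
    """Reconstruct approximate floats from codes and per-group metadata."""
    return q.astype(np.float32) * scales + mins

# Usage: quantize a toy "weight" tensor and measure reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s, m = group_quantize(w, bits=4, group_size=64)
w_hat = group_dequantize(q, s, m).reshape(-1)
# Rounding error per element is bounded by half the group's scale;
# smaller groups give tighter ranges and lower error at more metadata cost.
max_err = float(np.max(np.abs(w - w_hat)))
```

Uniform quantization corresponds to one group spanning the whole tensor, and outlier-reduced schemes further transform or isolate extreme values before applying this step; both fit the same quantize/dequantize skeleton.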