Towards Understanding Best Practices for Quantization of Vision-Language Models

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates efficient quantization of multimodal vision-language models, aiming to substantially reduce memory footprint and inference latency while preserving performance on tasks such as image captioning, retrieval, and visual question answering. It systematically evaluates post-training quantization techniques, including GPTQ and AWQ, applied component-wise to the vision encoder (ViT), the large language model (LLM), and the alignment modules under varying bit-width configurations. The findings reveal that the ViT and the LLM contribute comparably to overall multimodal performance, and that the LLM can be quantized to low bit widths with minimal accuracy degradation. Significant reductions in bits per weight (bpw) are achievable without compromising task accuracy, confirming the feasibility of low-bit quantization in multimodal systems and offering practical guidance for real-world deployment.
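As a rough illustration of the bit-width trade-off the summary describes, the sketch below simulates symmetric round-to-nearest uniform quantization of a weight list to a chosen bit width and dequantizes back to float. This is a generic baseline for intuition only, not the GPTQ or AWQ procedures the paper evaluates.

```python
def quantize(weights, bits):
    """Simulate symmetric round-to-nearest uniform quantization:
    map each float weight to one of 2**bits signed integer levels,
    then dequantize back to float. Reconstruction error shrinks
    as the bit width grows. Assumes at least one nonzero weight."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax   # per-tensor scale
    return [max(-qmax - 1, min(qmax, round(w / scale))) * scale
            for w in weights]

w = [0.93, -0.51, 0.08, -1.20]
print(quantize(w, bits=4))  # coarse 4-bit reconstruction
print(quantize(w, bits=8))  # finer 8-bit reconstruction
```

At 4 bits per weight, weight storage drops to a quarter of 16-bit storage; that kind of bpw reduction is what the study trades off against task accuracy.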

📝 Abstract
Large language models (LLMs) deliver impressive results for a variety of tasks, but state-of-the-art systems require fast GPUs with large amounts of memory. To reduce both the memory and latency of these systems, practitioners quantize their learned parameters, typically to half precision. A growing body of research focuses on preserving model performance at more aggressive bit widths, and some work has been done to apply these strategies to other models, like vision transformers. In our study, we investigate how a variety of quantization methods, including state-of-the-art GPTQ and AWQ, can be applied effectively to multimodal pipelines composed of vision models, language models, and their connectors. We address how performance on captioning, retrieval, and question answering is affected by bit width, quantization method, and which portion of the pipeline is quantized. Results reveal that the ViT and the LLM exhibit comparable importance for model performance, despite significant differences in parameter count, and that lower-bit quantization of the LLM achieves high accuracy at reduced bits per weight (bpw). These findings provide practical insights for efficient deployment of MLLMs and highlight the value of exploration for understanding component sensitivities in multimodal models. Our code is available at https://github.com/gautomdas/mmq.
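The component-wise experiments in the abstract amount to assigning a different bit width to the vision encoder, the language model, and the connector, then measuring each configuration. A minimal sketch of such an assignment follows; the component prefixes and parameter names are hypothetical, not taken from the paper's repository.

```python
# Hypothetical per-component bit widths for one sweep configuration;
# the prefixes "vit", "connector", and "llm" are illustrative only.
CONFIG = {"vit": 8, "connector": 16, "llm": 4}

def bits_for(param_name, config=CONFIG, default=16):
    """Choose the bit width for a parameter by matching its name
    against the component prefixes in the configuration; parameters
    outside any listed component keep the default precision."""
    for component, bits in config.items():
        if param_name.startswith(component + "."):
            return bits
    return default

print(bits_for("llm.layers.0.attn.q_proj"))  # 4
print(bits_for("vit.blocks.3.mlp.fc1"))      # 8
print(bits_for("head.logits"))               # falls back to 16
```

Sweeping such configurations per component is one way to probe the sensitivity differences the paper reports between the ViT and the LLM.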
Problem

Research questions and friction points this paper is trying to address.

quantization
vision-language models
multimodal models
model compression
bit-width
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantization
vision-language models
GPTQ
AWQ
multimodal LLMs
Gautom Das
University of Maryland, College Park
Vincent La
University of Maryland, College Park
Ethan Lau
University of Maryland, College Park
Abhinav Shrivastava
Associate Professor, University of Maryland, College Park
Computer Vision · Machine Learning · Robotics
M. Gwilliam
University of Maryland, College Park