The Effect of Compression Techniques on Large Multimodal Language Models in the Medical Domain

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address computational resource constraints in deploying medical multimodal large language models (MLLMs), this paper proposes an efficient compression pipeline for healthcare-adapted LLaVA models, comprising three stages: structured pruning, supervised fine-tuning (SFT), and activation-aware quantization. We introduce a novel layer-selection strategy for pruning, guided by inter-layer activation distributions, and tightly couple it with activation-aware quantization to preserve semantic sensitivity while enhancing compression fidelity. Experimental results demonstrate that our method reduces GPU memory consumption of the 7B-parameter model by 70%, enabling successful deployment on devices with only 4 GB VRAM. On multiple medical visual question answering and radiology report generation benchmarks, it achieves a 4% absolute accuracy improvement over conventional pruning-plus-quantization baselines, significantly advancing the efficiency–accuracy trade-off in resource-constrained clinical AI deployment.
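The activation-aware quantization stage described above can be illustrated with a minimal sketch. The idea (in the spirit of AWQ-style methods) is to fold per-channel activation statistics into the weights before rounding, so that channels carrying large activations lose less precision. The function name, the square-root smoothing exponent, and the per-row scaling scheme are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def activation_aware_quantize(W, act_scale, n_bits=4):
    """Quantize weight matrix W (out_dim, in_dim) to n_bits, scaling
    salient input channels by activation magnitude before rounding.
    Illustrative sketch only; the paper's exact method may differ."""
    # Hypothetical smoothing: amplify channels with large activations
    s = act_scale ** 0.5           # square-root exponent is a common heuristic
    s = s / s.mean()               # normalize so the overall scale is preserved
    Wq = W * s                     # fold the activation scale into the weights
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(Wq).max(axis=1, keepdims=True) / qmax  # per-row scale
    q = np.clip(np.round(Wq / scale), -qmax - 1, qmax)    # integer codes
    # Dequantize and unfold the scale to simulate inference behavior
    return (q * scale) / s
```

High-activation channels get a larger `s`, hence finer effective quantization steps after the final division, which is what "activation-aware" refers to here.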

📝 Abstract
Multimodal Large Language Models (MLLMs) hold great potential for use in the medical domain, but their computational costs necessitate efficient compression techniques. This paper evaluates the impact of structural pruning and activation-aware quantization on a fine-tuned LLaVA model for medical applications. We propose a novel layer selection method for pruning, analyze different quantization techniques, and assess the performance trade-offs in a prune-SFT-quantize pipeline. Our proposed method enables MLLMs with 7B parameters to run within 4 GB of VRAM, reducing memory usage by 70% while achieving 4% higher performance than traditional pruning and quantization techniques at the same compression ratio.
Problem

Research questions and friction points this paper is trying to address.

Evaluates compression impact on medical multimodal models
Proposes layer selection method for efficient pruning
Reduces VRAM usage while improving model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel layer selection method for pruning
Activation-aware quantization techniques analysis
Prune-SFT-quantize pipeline for performance trade-offs
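The layer selection idea above can be sketched with a simple heuristic: a layer whose output activations closely resemble its input activations changes the hidden state little, so it is a natural candidate for structured pruning. The cosine-similarity criterion below is an illustrative assumption, not the paper's exact selection rule:

```python
import numpy as np

def rank_layers_for_pruning(layer_inputs, layer_outputs):
    """Rank transformer layers by importance from activation statistics.
    layer_inputs / layer_outputs: lists of hidden-state arrays, one pair
    per layer. Layers whose output is most similar to their input are
    ranked first as pruning candidates. Illustrative heuristic only."""
    scores = []
    for x, y in zip(layer_inputs, layer_outputs):
        # Cosine similarity between input and output hidden states
        cos = np.sum(x * y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-8)
        scores.append(1.0 - cos)   # low score => near-identity, redundant layer
    # Indices sorted so the most prunable layers come first
    return np.argsort(scores)
```

In a full prune-SFT-quantize pipeline, the lowest-scoring layers would be removed, the smaller model recovered with supervised fine-tuning, and the result quantized.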
Tanvir Ahmed Khan
Columbia University
Computer Architecture, Software Systems, Programming Languages
Aranya Saha
Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh.
Ismam Nur Swapnil
Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh.
Mohammad Ariful Haque
Professor, Bangladesh University of Engineering and Technology
Signal Processing, Deep Learning