BcQLM: Efficient Vision-Language Understanding with Distilled Q-Gated Cross-Modal Fusion

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low energy efficiency, high computational overhead, and poor environmental sustainability of deploying multimodal large language models (MLLMs) in resource-constrained settings, this paper proposes BcQLM, an end-to-end lightweight vision-language understanding framework built around BreezeCLIP, a compact vision-language encoder. Methodologically, it introduces: (1) a Q-gated cross-modal fusion mechanism enabling dynamic, sparse vision–language interaction; (2) a compact CLIP-style encoder architecture optimized for knowledge distillation; and (3) lightweighting strategies applied across the entire pipeline, yielding a highly efficient model with only 1.2 billion parameters. Evaluated on multiple visual question answering benchmarks, BcQLM matches the performance of mainstream large models while reducing inference latency by 57% and GPU memory consumption by 63%, substantially improving deployability on edge devices and strengthening multi-task generalization.
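
The summary does not spell out the exact form of the Q-gated fusion module, so the following is only a minimal PyTorch sketch of one plausible reading: learnable query tokens cross-attend to visual features, and a sigmoid gate controls how much visual evidence each query passes on to the language model. All module names, dimensions, and the gating formulation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Q-gated cross-modal fusion block (illustrative assumption,
# not the authors' code): learnable query tokens cross-attend to visual patch
# features, and a per-query sigmoid gate modulates the fused signal before it
# is handed to the language model.
import torch
import torch.nn as nn


class QGatedFusion(nn.Module):
    def __init__(self, dim: int = 768, num_queries: int = 32, num_heads: int = 8):
        super().__init__()
        # Learnable query tokens that summarise the image for the LLM.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        # Cross-attention: queries attend to visual patch embeddings.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-query gate in [0, 1]; near-zero gates suppress a query, which is
        # one way the "dynamic, sparse" interaction could be realised.
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_patches, dim) from the image encoder.
        b = visual_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        fused, _ = self.cross_attn(q, visual_feats, visual_feats)
        g = self.gate(fused)                   # (batch, num_queries, 1)
        return self.norm(q + g * fused)        # gated residual fusion


# Example usage with dummy visual features.
if __name__ == "__main__":
    feats = torch.randn(2, 196, 768)           # e.g. 14x14 patches at dim 768
    tokens = QGatedFusion()(feats)
    print(tokens.shape)                        # torch.Size([2, 32, 768])
```

In this reading, the fused query tokens (rather than the full patch sequence) are what the language model consumes, which is where the computational savings would come from.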

📝 Abstract
As multimodal large language models (MLLMs) advance, their large-scale architectures pose challenges for deployment in resource-constrained environments. In the age of large models, where energy efficiency, computational scalability and environmental sustainability are paramount, the development of lightweight and high-performance models is critical for real-world applications. As such, we propose a lightweight MLLM framework for end-to-end visual question answering. Our proposed approach centres on BreezeCLIP, a compact yet powerful vision-language encoder optimised for efficient multimodal understanding. With only 1.2 billion parameters overall, our model significantly reduces computational cost while achieving performance comparable to standard-size MLLMs. Experiments conducted on multiple datasets further validate its effectiveness in balancing accuracy and efficiency. The modular and extensible design enables generalisation to broader multimodal tasks. The proposed lightweight vision-language framework is denoted as BcQLM (BreezeCLIP-enhanced Q-Gated Multimodal Language Model). It offers a promising path toward deployable MLLMs under practical hardware constraints. The source code is available at https://github.com/thico0224/BcQLM.
Problem

Research questions and friction points this paper is trying to address.

Develop a lightweight vision-language model for resource-constrained environments
Reduce computational cost while maintaining multimodal understanding performance
Enable efficient visual question answering with a minimal parameter budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distilled Q-gated cross-modal fusion mechanism for dynamic, sparse vision–language interaction
Compact BreezeCLIP vision-language encoder design, optimised via knowledge distillation (a generic distillation sketch follows this list)
Lightweight 1.2B-parameter multimodal architecture
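
The paper's distillation recipe is not detailed in this summary, so the sketch below only illustrates a generic way a compact student encoder could be distilled from a frozen full-size CLIP teacher: an embedding-alignment term plus a KL term over image-text similarity distributions. The loss composition, temperature, and weighting are assumptions, not the published objective.

```python
# Generic feature- and logit-level distillation objective for a compact
# student vision-language encoder against a frozen teacher (illustrative
# assumption; the exact BreezeCLIP objective is not given in this summary).
import torch
import torch.nn.functional as F


def distillation_loss(student_img, student_txt, teacher_img, teacher_txt,
                      temperature: float = 2.0, alpha: float = 0.5):
    # All inputs are (batch, dim) embeddings; normalise to unit vectors.
    s_i, s_t = F.normalize(student_img, dim=-1), F.normalize(student_txt, dim=-1)
    t_i, t_t = F.normalize(teacher_img, dim=-1), F.normalize(teacher_txt, dim=-1)

    # (1) Embedding alignment: pull student embeddings toward the teacher's.
    feat_loss = F.mse_loss(s_i, t_i) + F.mse_loss(s_t, t_t)

    # (2) Relational distillation: match the student's image-text similarity
    # distribution to the teacher's soft targets via KL divergence.
    s_logits = s_i @ s_t.t() / temperature
    t_logits = t_i @ t_t.t() / temperature
    kd_loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                       F.softmax(t_logits, dim=-1),
                       reduction="batchmean") * temperature ** 2

    return alpha * feat_loss + (1 - alpha) * kd_loss
```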
Authors
Sike Xiang, Department of Computer Science, Durham University
Shuang Chen, Department of Computer Science, Durham University
Amir Atapour-Abarghouei, Department of Computer Science, Durham University
Machine Learning · Deep Learning · Computer Vision · Image Processing · Natural Language Processing