VEQ: Modality-Adaptive Quantization for MoE Vision-Language Models

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing post-training quantization (PTQ) methods struggle to balance compression efficiency and accuracy in Mixture-of-Experts (MoE) vision-language models because of modality discrepancies and expert heterogeneity. This work proposes VEQ, the first framework to jointly model cross-modal differences and expert heterogeneity. VEQ introduces a dual quantization strategy that is both modality-expert-aware and modality-affinity-aware, combining expert activation frequency weighting with an enhanced Hessian matrix that fuses multimodal information for calibration. Under the W3A16 configuration, VEQ achieves average accuracy improvements of 2.04% on Kimi-VL and 3.09% on Qwen3-VL, significantly outperforming current state-of-the-art methods.
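The "expert activation frequency weighting" mentioned above can be illustrated with a minimal sketch. All function names, shapes, and the squared-error objective below are illustrative assumptions, not the paper's exact formulation: the idea is simply that experts the router selects more often receive more weight when minimizing quantization error.

```python
import numpy as np

def expert_activation_frequencies(router_topk_indices, num_experts):
    """Count how often each expert is selected across calibration tokens.

    router_topk_indices: (num_tokens, k) array of expert ids chosen by the
    MoE router for each token. Returns a normalized frequency vector of
    shape (num_experts,) summing to 1.
    """
    counts = np.bincount(router_topk_indices.ravel(), minlength=num_experts)
    return counts / counts.sum()

def frequency_weighted_quant_error(weights, quantized, freqs):
    """Frequency-weighted total quantization error (illustrative objective):
    frequently activated experts contribute more, so a quantizer minimizing
    this quantity prioritizes error reduction for pivotal experts."""
    per_expert_errors = [np.sum((w - q) ** 2) for w, q in zip(weights, quantized)]
    return float(np.dot(freqs, per_expert_errors))
```

For example, with three calibration tokens each routed to two experts, an expert picked in every token dominates the weighted objective, matching the summary's claim that error minimization is prioritized for pivotal experts.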

📝 Abstract
Mixture-of-Experts (MoE) Vision-Language Models (VLMs) offer remarkable performance but incur prohibitive memory and computational costs, making compression essential. Post-Training Quantization (PTQ) is an effective training-free technique for reducing this overhead. Existing quantization paradigms fall short because they are oblivious to two critical forms of heterogeneity: the inherent discrepancy between vision and language tokens, and the non-uniform contribution of different experts. To bridge this gap, we propose Visual Expert Quantization (VEQ), a dual-aware quantization framework designed to simultaneously accommodate cross-modal differences and inter-expert heterogeneity. Specifically, VEQ incorporates 1) Modality-expert-aware Quantization, which uses expert activation frequency to prioritize error minimization for pivotal experts, and 2) Modality-affinity-aware Quantization, which constructs an enhanced Hessian matrix by integrating token-expert affinity with modality information to guide the calibration process. Extensive experiments across diverse benchmarks verify that VEQ consistently outperforms state-of-the-art baselines. Under the W3A16 configuration, our method achieves average accuracy gains of 2.04% on Kimi-VL and 3.09% on Qwen3-VL over previous SOTA quantization methods, demonstrating superior robustness across various multimodal tasks. Our code will be available at https://github.com/guangshuoqin/VEQ.
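The "enhanced Hessian matrix" in the abstract can be sketched in GPTQ style, where the standard calibration Hessian for a linear layer is H = XᵀX over calibration activations X. The sketch below is an assumption about how token-expert affinity and modality information might be fused as per-token weights; the variable names, the multiplicative fusion, and the damping scheme are illustrative, not the paper's exact construction.

```python
import numpy as np

def enhanced_hessian(X, affinity, modality_weight, damp=0.01):
    """Affinity- and modality-weighted Hessian approximation (illustrative).

    X: (num_tokens, d) calibration activations entering one expert.
    affinity: (num_tokens,) token-expert routing scores (e.g. gate softmax).
    modality_weight: (num_tokens,) per-token modality scaling, e.g. boosting
    vision tokens relative to language tokens.
    Returns a (d, d) matrix H = X^T diag(s) X + damping, where s fuses the
    two signals so calibration emphasizes high-affinity tokens.
    """
    s = affinity * modality_weight                 # fused per-token weight
    H = (X * s[:, None]).T @ X                     # weighted second-moment matrix
    # Standard GPTQ-style dampening to keep H well-conditioned for inversion.
    H += damp * np.mean(np.diag(H)) * np.eye(X.shape[1])
    return H
```

A quantizer would then use this H in place of the plain XᵀX when computing per-column update terms, so reconstruction error is measured under the fused modality/affinity weighting rather than treating all calibration tokens equally.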
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
Vision-Language Models
Post-Training Quantization
Modality Heterogeneity
Expert Heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality-Adaptive Quantization
Mixture-of-Experts
Post-Training Quantization
Vision-Language Models
Heterogeneity-Aware Calibration