AI Summary
To address the high memory overhead and accuracy loss incurred when deploying Mixture-of-Experts (MoE) Vision-Language Models (VLMs), this paper proposes a post-training mixed-precision quantization method. The core innovations are (i) the first use of Hessian trace approximation to estimate expert-wise sensitivity, enabling fine-grained per-expert bit-width allocation; and (ii) dynamic precision configuration via expert clustering and activation-frequency analysis, both without fine-tuning. Evaluated on multiple state-of-the-art MoE VLMs, the method substantially reduces the memory footprint compared to uniform-quantization baselines while preserving near-SOTA performance on the VLMEvalKit benchmark, with an average accuracy drop below 0.8%. This demonstrates that expert-level adaptive quantization achieves an effective trade-off between efficiency and fidelity for MoE VLM deployment.
Abstract
Large Language and Vision Models using a Mixture-of-Experts (MoE) architecture pose significant deployment challenges due to their computational and memory demands. Mixed-precision quantization assigns different precisions to different layers of an LLM/VLM based on each layer's sensitivity and importance within the model. In this work, we propose MoPEQ, a post-training quantization algorithm that assigns an optimal bit width to each expert. Our method balances accuracy and model size by analyzing each expert's sensitivity with a Hessian trace approximation instead of relying on the expert's activation frequency. This per-expert granularity clusters similar experts to maintain model performance while reducing memory requirements. Experimental results on VLMEvalKit benchmark datasets using the state-of-the-art VLMs Deepseek-VL2-tiny, -small, and -base and MolmoE demonstrate that our mixed-precision quantized MoEs achieve competitive accuracy with substantial memory-footprint improvements over uniform-precision baselines. We present a comprehensive study of expert activation frequency and Hessian-trace sensitivity under both layer-wise and model-wide expert precision allocations of 2, 3, and 4 bits, providing a thorough understanding of mixed-precision quantization for VLM MoEs.
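As a rough illustration of the two ingredients the abstract describes, the sketch below estimates a per-expert sensitivity score with Hutchinson's stochastic Hessian-trace estimator and then buckets experts into the 2-, 3-, and 4-bit groups the paper evaluates. Everything here is a hypothetical toy: the expert "Hessians" are random positive semi-definite matrices, and the equal-thirds mapping from sensitivity rank to bit width is an assumption for illustration, not the paper's MoPEQ allocation rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def hutchinson_trace(H, num_samples=200):
    """Estimate tr(H) via Hutchinson's method: E[v^T H v] = tr(H)
    for Rademacher vectors v (entries +/-1 with equal probability).
    In practice this uses Hessian-vector products, never an explicit H."""
    n = H.shape[0]
    total = 0.0
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        total += v @ H @ v
    return total / num_samples

# Toy "experts": each gets a random PSD matrix standing in for the
# Hessian of the loss w.r.t. that expert's weights.
num_experts = 8
hessians = []
for _ in range(num_experts):
    A = rng.normal(size=(16, 16))
    hessians.append(A @ A.T)  # PSD by construction

# Sensitivity score per expert = approximate Hessian trace.
sens = np.array([hutchinson_trace(H) for H in hessians])

# Hypothetical allocation: rank experts by sensitivity and split into
# thirds; the least sensitive third gets 2 bits, the most sensitive 4.
order = np.argsort(sens)
bits = np.empty(num_experts, dtype=int)
for group, b in zip(np.array_split(order, 3), [2, 3, 4]):
    bits[group] = b

print("sensitivities:", np.round(sens, 1))
print("bit widths:   ", bits)
```

With a few hundred probe vectors the estimate typically lands within a few percent of the exact trace, which is accurate enough for a coarse ranking of experts; the appeal of the estimator is that it only needs Hessian-vector products, so the full Hessian never has to be materialized for real model weights.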