MoPEQ: Mixture of Mixed Precision Quantized Experts

📅 2025-09-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of high memory overhead and compromised accuracy in deploying Mixture-of-Experts (MoE) Vision-Language Models (VLMs), this paper proposes a post-training mixed-precision quantization method. The core innovation lies in (i) the first use of Hessian trace approximation to estimate expert-wise sensitivity, enabling fine-grained bit-width allocation per expert; and (ii) dynamic precision configuration via expert clustering and activation frequency analysis, both without fine-tuning. Evaluated on multiple state-of-the-art MoE VLMs, the method significantly reduces memory footprint compared to uniform quantization baselines while preserving near-SOTA performance on the VLMEvalKit benchmark, with an average accuracy drop of less than 0.8%. This demonstrates that expert-level adaptive quantization achieves an effective trade-off between efficiency and fidelity in MoE VLM deployment.


๐Ÿ“ Abstract
Large Language and Vision Models using a Mixture-of-Experts (MoE) architecture pose significant challenges for deployment due to their computational and memory demands. Mixed-precision quantization assigns different precisions to different layers of an LLM/VLM based on each layer's sensitivity and importance within the model. In this work, we propose a post-training quantization algorithm, MoPEQ, that assigns an optimal bit width to each expert. Our method balances accuracy and model size by analyzing each expert's sensitivity using Hessian trace approximation instead of relying on the expert's activation frequency. This per-expert granularity approach clusters similar experts to maintain model performance while reducing memory requirements. Experimental results on VLMEvalKit benchmark datasets using the state-of-the-art VLMs DeepSeek-VL2-tiny, -small, and -base, and MolmoE demonstrate that our mixed-precision quantized MoEs achieve competitive accuracy with substantial improvements in memory footprint compared to uniform-precision baselines. We perform a comprehensive study of the impact of expert activation frequency and Hessian-trace sensitivity at both layer-wise and model-wide expert precision allocations of 2, 3, and 4 bits to provide a thorough understanding of mixed-precision quantization of VLM MoEs.
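The expert-wise sensitivity score is the trace of the loss Hessian with respect to an expert's weights, which is typically estimated stochastically rather than by forming the Hessian explicitly. A minimal NumPy sketch using Hutchinson's estimator, the standard trace approximation; `hutchinson_trace` and the toy matrix below are illustrative assumptions, not the paper's code (in practice the Hessian-vector product would come from double backpropagation):

```python
import numpy as np

def hutchinson_trace(hessian_vp, dim, n_samples=256, rng=None):
    """Estimate tr(H) via Hutchinson's method: tr(H) = E[v^T H v]
    for Rademacher probe vectors v with entries in {-1, +1}.

    hessian_vp: callable computing the Hessian-vector product H @ v.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        total += v @ hessian_vp(v)             # one sample of v^T H v
    return total / n_samples

# Toy check with an explicit symmetric matrix standing in for an
# expert's Hessian; the exact trace is 4.0 + 1.0 + 0.25 = 5.25.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 1.0, 0.5],
              [0.0, 0.5, 0.25]])
est = hutchinson_trace(lambda v: H @ v, dim=3, n_samples=2000, rng=0)
```

With 2000 probes the estimate lands close to the exact trace of 5.25; experts with larger estimated traces are more sensitive to quantization error and so warrant higher precision.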
Problem

Research questions and friction points this paper is trying to address.

Optimizing expert bit-width allocation in MoE models
Reducing memory footprint while maintaining model accuracy
Analyzing expert sensitivity via Hessian trace approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed precision quantization per expert
Hessian trace sensitivity analysis
Clustering similar experts for efficiency
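The clustering idea can be pictured as grouping experts by sensitivity score and mapping each group to one of the available bit widths (2, 3, or 4 bits), with more sensitive experts receiving more bits. A hedged sketch assuming a simple 1-D k-means over scores; `allocate_bits` is a hypothetical helper, and the paper's exact grouping may differ:

```python
import numpy as np

def allocate_bits(sensitivities, bit_options=(2, 3, 4), n_iter=50):
    """Cluster experts by sensitivity (1-D k-means) and map each
    cluster to a bit width: lowest-sensitivity cluster -> fewest bits.
    """
    s = np.asarray(sensitivities, dtype=float)
    k = len(bit_options)
    # Initialize one centroid per bit width at evenly spaced quantiles.
    centroids = np.quantile(s, np.linspace(0.0, 1.0, k))
    for _ in range(n_iter):
        # Assign each expert to its nearest centroid.
        labels = np.argmin(np.abs(s[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = s[labels == j].mean()
    # Rank clusters by centroid so bit widths increase with sensitivity.
    rank = np.argsort(np.argsort(centroids))
    return [bit_options[rank[l]] for l in labels]

# Toy example: six experts, sensitivity (e.g. Hessian trace) increasing.
bits = allocate_bits([0.1, 0.2, 1.0, 1.1, 9.0, 10.0])
# The two least sensitive experts get 2 bits, the middle pair 3,
# and the two most sensitive experts 4.
```

The design choice here is that cluster count equals the number of candidate bit widths, so the precision assignment falls directly out of the cluster ranking.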
Krishna Teja Chitty-Venkata
ML Research Engineer @ Red Hat
Large Language Models · Quantization · Neural Architecture Search · Pruning
Jie Ye
Illinois Institute of Technology
Murali Emani
Argonne National Laboratory