MC#: Mixture Compressor for Mixture-of-Experts Large Models

📅 2025-10-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high storage and computational overhead induced by expert modules in Mixture-of-Experts (MoE) large language models, this paper proposes MC#, a novel compression framework that jointly integrates Pre-Loading Mixed-Precision Quantization (PMQ), a mixed-precision static quantization scheme, and Online Top-any Pruning (OTP), a differentiable dynamic routing-sparsification method. PMQ employs linear programming to optimize bit-width allocation across experts, enabling fine-grained weight compression; OTP leverages Gumbel-Softmax sampling for end-to-end trainable, sparse expert selection. Evaluated on DeepSeek-VL2, MC# compresses expert weights to an average of 2.57 bits (a 6.2× reduction) with only a 1.7% average accuracy degradation across five multimodal tasks, reduces expert activations by over 20%, and adds less than 1% inference-latency overhead. The framework achieves Pareto-optimal trade-offs between efficiency and accuracy.
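The summary's description of OTP, Gumbel-Softmax sampling used to make per-token expert selection differentiable, can be illustrated with a toy sketch. This is not the paper's actual formulation: the router logits, temperature, and the 0.1 keep-threshold below are all illustrative assumptions, and the paper learns its selection end-to-end rather than thresholding a fixed gate.

```python
import numpy as np

def gumbel_softmax_gate(logits, tau=1.0, rng=None):
    """Sample a soft expert-selection gate via the Gumbel-Softmax trick.

    Adding Gumbel noise to the router logits and applying a temperature-
    scaled softmax yields a differentiable, stochastic approximation of a
    discrete expert choice (sharper as tau -> 0).
    """
    rng = rng or np.random.default_rng(0)
    # Standard Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()

# Hypothetical router scores for one token over 4 experts
logits = np.array([2.0, 0.5, -1.0, 1.5])
gate = gumbel_softmax_gate(logits, tau=0.5)

# "Top-any" pruning: keep however many experts clear a gate threshold,
# rather than a fixed top-k (threshold value is an assumption here)
active = gate > 0.1
```

Because the gate is a smooth function of the logits, gradients flow back into the router during training, which is what makes the sparsification end-to-end trainable.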


πŸ“ Abstract
Mixture-of-Experts (MoE) effectively scales large language models (LLMs) and vision-language models (VLMs) by increasing capacity through sparse activation. However, preloading all experts into memory and activating multiple experts per input introduces significant computational and memory overhead, making the expert module a major contributor to model size and inference cost. To address this, we propose MC# (Mixture-Compressor-sharp), a framework that combines static quantization and dynamic expert pruning by leveraging the significance of experts and tokens for aggressive compression of MoE-LLMs/VLMs. To reduce storage and loading costs, we introduce Pre-Loading Mixed-Precision Quantization (PMQ), which optimizes bit allocation via linear programming, balancing expert importance and quantization error for a Pareto-optimal trade-off between size and performance. To reduce runtime computation, Online Top-any Pruning (OTP) uses Gumbel-Softmax sampling to dynamically select a subset of experts per token, enabling fine-grained control over activation. By combining PMQ's static bit-width optimization with OTP's dynamic routing, MC# achieves extreme compression with minimal accuracy loss. On DeepSeek-VL2, MC# achieves a 6.2× weight reduction at 2.57 average bits with only a 1.7% accuracy drop across five multimodal benchmarks. Additionally, OTP reduces expert activation by over 20% with less than 1% performance degradation, demonstrating strong potential for efficient MoE-based model deployment.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational and memory overhead in Mixture-of-Experts models
Optimizing bit allocation via mixed-precision quantization for storage efficiency
Dynamically pruning experts per token to minimize activation costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Static quantization optimizes bit allocation via linear programming
Dynamic expert pruning uses Gumbel-Softmax for token-wise selection
Combined approach achieves extreme compression with minimal accuracy loss
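The first innovation above, choosing per-expert bit-widths by linear programming, can be sketched as a small relaxed assignment LP: minimize importance-weighted quantization error subject to an average-bit budget. All numbers below are toy assumptions (the error model, importance scores, and candidate bit-widths are illustrative, not the paper's); only the 2.57-bit average budget comes from the reported results.

```python
import numpy as np
from scipy.optimize import linprog

# Toy setup: 4 experts, candidate bit-widths {2, 3, 4}.
importance = np.array([1.0, 0.4, 0.7, 0.2])   # hypothetical expert significance
bits = np.array([2.0, 3.0, 4.0])
# err[i, j]: assumed quantization error of expert i at bits[j]
# (a simple proxy that shrinks as bit-width grows)
err = importance[:, None] / bits[None, :] ** 2
n_exp, n_bits = err.shape

c = err.ravel()                                 # minimize total weighted error
# Each expert selects exactly one bit-width (relaxed to fractions summing to 1)
A_eq = np.kron(np.eye(n_exp), np.ones(n_bits))
b_eq = np.ones(n_exp)
# Average bit-width must stay within the 2.57-bit budget from the paper
A_ub = np.tile(bits, n_exp)[None, :]
b_ub = np.array([2.57 * n_exp])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
alloc = res.x.reshape(n_exp, n_bits)            # alloc[i, j]: fraction of bits[j]
```

In this relaxation, low-importance experts are pushed toward 2-bit quantization so that high-importance experts can spend more of the shared budget, which is the intuition behind PMQ's importance-versus-error balancing.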
Wei Huang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong SAR
Yue Liao
National University of Singapore
Computer Vision, Deep Learning, MLLM
Yukang Chen
Research Scientist, NVIDIA
Large Language Models, Efficient Deep Learning, Long AI
Jianhui Liu
PhD student, The University of Hong Kong
Robotics, 3D scene understanding, 6D Pose Estimation
Haoru Tan
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong SAR
Si Liu
Fred Hutchinson Cancer Center
Genomics, Biostatistics, Anomaly Detection, Open Category Detection
Shiming Zhang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong SAR
Shuicheng Yan
School of Computing, National University of Singapore, Singapore
Xiaojuan Qi
Assistant Professor, The University of Hong Kong
3D Vision, Deep Learning, Artificial Intelligence, Medical Image Analysis