🤖 AI Summary
To address the excessive memory overhead of fine-tuning and deploying Mixture-of-Experts (MoE) large language models in resource-constrained settings, this paper proposes SlimMoE, a multi-stage compression framework. SlimMoE combines structured expert slimming (pruning) with staged knowledge distillation through intermediate-sized models, avoiding the performance collapse typical of one-shot pruning and achieving high-fidelity compression with only 400B tokens of data, less than 10% of the original model's training data. It compresses Phi-3.5-MoE (41.9B parameters) into 7.6B-parameter (Phi-mini-MoE) and 3.8B-parameter (Phi-tiny-MoE) variants, both of which can be fine-tuned on a single GPU. The compressed models achieve MMLU performance comparable to Llama 3.1 8B with significantly lower inference latency. SlimMoE thus substantially improves training and inference efficiency in memory- and compute-limited environments.
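To make the expert-slimming step concrete, here is a minimal PyTorch sketch, assuming a SwiGLU-style feed-forward expert and a simple weight-magnitude importance score; the names (`Expert`, `slim_expert`) and the pruning criterion are illustrative assumptions, not the paper's actual code, which may use an activation- or gradient-based score instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """A SwiGLU feed-forward expert (structure assumed, not from the paper)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))


@torch.no_grad()
def slim_expert(expert: Expert, keep: int) -> Expert:
    """Keep only the `keep` most important intermediate neurons of one expert."""
    # Magnitude-based importance per intermediate neuron: product of the
    # norms of its incoming and outgoing weights (an assumed proxy).
    score = (
        expert.gate_proj.weight.norm(dim=1)
        * expert.up_proj.weight.norm(dim=1)
        * expert.down_proj.weight.norm(dim=0)
    )
    idx = score.topk(keep).indices.sort().values  # preserve neuron order

    slim = Expert(expert.gate_proj.in_features, keep)
    slim.gate_proj.weight.copy_(expert.gate_proj.weight[idx])
    slim.up_proj.weight.copy_(expert.up_proj.weight[idx])
    slim.down_proj.weight.copy_(expert.down_proj.weight[:, idx])
    return slim
```

Applying this to every expert in every MoE layer shrinks both total and activated parameter counts while leaving the router and the number of experts untouched, which is what distinguishes slimming experts from removing them outright.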
📄 Abstract
The Mixture of Experts (MoE) architecture has emerged as a powerful paradigm for scaling large language models (LLMs) while maintaining inference efficiency. However, the enormous memory requirements of MoE models make them prohibitively expensive to fine-tune or deploy in resource-constrained environments. To address this challenge, we introduce SlimMoE, a multi-stage compression framework for transforming large MoE models into much smaller, efficient variants without incurring the prohibitive costs of training from scratch. Our method systematically reduces parameter counts by slimming experts and transferring knowledge through intermediate stages, effectively mitigating the performance degradation common in one-shot pruning approaches. Using this framework, we compress Phi-3.5-MoE (41.9B total/6.6B activated parameters) to create Phi-mini-MoE (7.6B total/2.4B activated parameters) and Phi-tiny-MoE (3.8B total/1.1B activated parameters) using only 400B tokens, less than 10% of the original model's training data. These compressed models can be fine-tuned on a single GPU (an A100 for Phi-mini-MoE, an A6000 for Phi-tiny-MoE), making them highly suitable for academic and resource-limited settings. Our experiments demonstrate that these compressed models outperform others of similar size and remain competitive with larger models. For instance, Phi-mini-MoE achieves performance similar to or better than Phi-3-mini using only 2/3 of the activated parameters, and yields MMLU scores comparable to Llama 3.1 8B while having significantly lower latency. Our findings demonstrate that structured pruning combined with staged distillation offers an effective path to creating high-quality, compact MoE models, paving the way for broader adoption of MoE architectures. We make our models publicly available at https://huggingface.co/microsoft/Phi-mini-MoE-instruct and https://huggingface.co/microsoft/Phi-tiny-MoE-instruct.
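As a rough illustration of the staged slim-and-distill loop described in the abstract, the sketch below prunes, distills against the previous-stage teacher, and repeats; the two-stage keep schedule, forward-KL logit loss, temperature, and optimizer settings are all illustrative assumptions rather than the paper's actual recipe, and `slim_fn` stands in for a structured slimming pass like the one sketched above.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Token-level forward KL between teacher and student distributions."""
    t = temperature
    s = F.log_softmax(student_logits / t, dim=-1).flatten(0, -2)  # (tokens, vocab)
    q = F.log_softmax(teacher_logits / t, dim=-1).flatten(0, -2)
    return F.kl_div(s, q, log_target=True, reduction="batchmean") * (t * t)


def staged_compress(model, loader, slim_fn, keep_schedule=(0.5, 0.25), steps=1000):
    """Multi-stage compression: each distilled stage teaches the next, smaller one."""
    teacher = model
    for keep_ratio in keep_schedule:            # illustrative schedule
        student = slim_fn(teacher, keep_ratio)  # structured expert slimming
        opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
        for _, batch in zip(range(steps), loader):
            with torch.no_grad():
                t_logits = teacher(**batch).logits  # assumes HF-style causal LMs
            loss = kd_loss(student(**batch).logits, t_logits)
            loss.backward()
            opt.step()
            opt.zero_grad()
        teacher = student  # promote the distilled stage to teacher
    return teacher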