🤖 AI Summary
Large Mixture-of-Experts (MoE) models suffer from prohibitive memory overhead because all expert parameters must be stored, hindering practical deployment. To address this, we propose a training-free, highly efficient compression framework. First, we perform sparse expert merging guided by element-wise weight redundancy analysis. Second, we introduce a dual-mask mechanism that explicitly models both parameter sharing across experts and expert-specific characteristics. Third, we design a bit-level encoding scheme that reuses underutilized exponent bits of floating-point numbers to shrink the storage footprint and enable native GPU acceleration. Our method compresses MoE models by up to 50% without accuracy loss; at that ratio it outperforms the state of the art on MMLU by up to 16.7% and accelerates inference by up to 1.28×, significantly alleviating the memory bottleneck of MoE models.
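The element-wise merge described above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's exact algorithm: the `rel_tol` agreement test, the averaging rule for shared elements, and the function names are all illustrative choices. It merges two experts into one shared tensor plus per-expert residuals selected by a binary mask.

```python
import numpy as np

def dual_mask_merge(w1, w2, rel_tol=0.1):
    """Illustrative element-wise merge of two expert weight matrices.

    Elements where the experts nearly agree are replaced by a single
    shared value; the remaining, expert-specific elements are kept
    separately and selected at reconstruction time via a binary mask.
    """
    agree = np.abs(w1 - w2) <= rel_tol * np.maximum(np.abs(w1), np.abs(w2))
    shared = np.where(agree, 0.5 * (w1 + w2), 0.0)  # one copy serves both experts
    spec = {1: np.where(agree, 0.0, w1),            # expert-specific residuals
            2: np.where(agree, 0.0, w2)}
    return shared, agree, spec

def reconstruct(shared, agree, spec_e):
    # Shared elements come from the merged tensor; the rest come from
    # the expert's own retained values.
    return np.where(agree, shared, spec_e)
```

Storage drops because the agreeing elements are kept once instead of once per expert, at the cost of one mask bit per element, which motivates the bit-packed encoding below.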
📝 Abstract
Mixture-of-Experts (MoE) models have shown strong potential for scaling language models efficiently by activating only a small subset of experts per input. However, their widespread deployment remains limited by the high memory overhead of storing all expert parameters, particularly as the number of experts increases. To address this challenge, prior works have explored expert dropping and merging strategies, yet they often suffer from performance drops at high compression ratios. In this paper, we introduce PuzzleMoE, a training-free MoE compression method that achieves both high accuracy and efficient inference through two key innovations. First, PuzzleMoE performs sparse expert merging by identifying element-wise weight redundancy and specialization, using a dual-mask to capture both shared and expert-specific parameters. Second, to avoid the overhead of storing binary masks and signs, PuzzleMoE introduces a bit-packed encoding scheme that reuses underutilized exponent bits, enabling efficient MoE inference on GPUs. Extensive experiments demonstrate that PuzzleMoE can compress MoE models by up to 50% while maintaining accuracy across various tasks. Specifically, it outperforms prior MoE compression methods by up to 16.7% on MMLU at a 50% compression ratio, and achieves up to 1.28× inference speedup.
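The bit-packed encoding exploits the fact that typical weight distributions leave the high exponent bits unused. A minimal numpy sketch of one such trick, assuming bfloat16-style weights (1 sign bit, 8 exponent bits, 7 mantissa bits) whose magnitudes stay below 2, so the exponent's most significant bit is always zero and can carry a mask bit. The function names and the magnitude precondition are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def pack_mask_bit(bits, mask):
    """Stash one binary-mask bit per weight into the top exponent bit.

    `bits` is a uint16 view of bfloat16 weights. Assumption: every
    weight has magnitude < 2, so the biased exponent never exceeds
    127 and bit 14 (the exponent's MSB) is free to reuse.
    """
    assert not np.any(bits & 0x4000), "weight magnitude >= 2; bit 14 is in use"
    return bits | (mask.astype(np.uint16) << 14)

def unpack_mask_bit(packed):
    # Recover the mask from bit 14, then clear it to restore the weights.
    mask = ((packed >> 14) & 1).astype(bool)
    return packed & np.uint16(0xBFFF), mask
```

Because the mask rides inside the weight tensor itself, no separate mask array needs to be stored or fetched, which is what makes a fused GPU decode kernel practical.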