🤖 AI Summary
To address expert redundancy and high memory overhead in Sparse Mixture-of-Experts (SMoE) models, this paper proposes a systematic pruning framework. First, we introduce MC-Suite—the first fine-grained expert importance evaluation suite—enabling precise quantification of expert contribution. Second, we design an iterative re-evaluation pruning paradigm to prevent catastrophic capability collapse induced by one-shot pruning. Third, we identify and mitigate severe degradation in instruction-following ability post-pruning via task-agnostic fine-tuning and k-shot prompt augmentation, yielding high-performance, reproducible MoE “lottery subnetworks.” Experiments demonstrate that our method stably prunes 30% of experts while retaining over 98% of original performance across diverse benchmarks, significantly reducing memory footprint and computational cost. This work establishes a novel paradigm for efficient SMoE deployment in resource-constrained settings.
📝 Abstract
Sparsely activated Mixture-of-Experts (SMoE) has shown promise in scaling up the learning capacity of neural networks. However, vanilla SMoEs have issues such as expert redundancy and heavy memory requirements, making them inefficient and non-scalable, especially for resource-constrained scenarios. Expert-level sparsification of SMoEs involves pruning the least important experts to address these limitations. In this work, we aim to address three questions: (1) What is the best recipe to identify the least knowledgeable subset of experts that can be dropped with minimal impact on performance? (2) How should we perform expert dropping (one-shot or iterative), and what correction measures can we undertake to minimize its drastic impact on SMoE subnetwork capabilities? (3) What capabilities of full-SMoEs are severely impacted by the removal of the least dominant experts, and how can we recover them? Firstly, we propose MoE Experts Compression Suite (MC-Suite), which is a collection of some previously explored and multiple novel recipes to provide a comprehensive benchmark for estimating expert importance from diverse perspectives, as well as unveil numerous valuable insights for SMoE experts. Secondly, unlike prior works with a one-shot expert pruning approach, we explore the benefits of iterative pruning with the re-estimation of the MC-Suite criterion. Moreover, we introduce the benefits of task-agnostic fine-tuning as a correction mechanism during iterative expert dropping, which we term MoE Lottery Subnetworks. Lastly, we present an experimentally validated conjecture that, during expert dropping, SMoEs' instruction-following capabilities are predominantly hurt, which can be restored to a robust level subject to external augmentation of instruction-following capabilities using k-shot examples and supervised fine-tuning.