🤖 AI Summary
To address the high inference cost and deployment burden posed by the large number of experts in Mixture-of-Experts (MoE) models, this paper proposes a novel "structured (expert-level) first, then unstructured" joint pruning paradigm. Key contributions: (1) the first empirical finding that expert-level structured pruning, applied before unstructured pruning, significantly improves final accuracy over unstructured-only pruning; (2) an implicit structural modeling method based on expert behavioral similarity that reduces pruning complexity from $O(\frac{k^n}{\sqrt{n}})$ to $O(1)$, enabling efficient, scalable pruning; and (3) a staged pruning pipeline integrating behavioral clustering, greedy joint decision-making, and fine-grained weight pruning. Evaluated on the 480B-parameter Snowflake Arctic MoE model (128 experts), the method reaches 40% sparsity with only one H100 GPU and two hours of compute, with near-lossless performance on benchmarks such as GSM8K, substantially outperforming state-of-the-art unstructured pruning methods.
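The staged pipeline summarized above (behavioral clustering, greedy joint decisions, then fine-grained weight pruning) can be sketched in miniature. Everything below is an illustrative assumption, not the paper's exact algorithm: the function names are invented, cosine similarity of expert outputs stands in for the paper's behavioral-similarity measure, and plain magnitude pruning stands in for its unstructured step.

```python
import numpy as np

def select_experts_to_keep(expert_outputs, n_keep):
    """Greedily pick a diverse set of experts by behavioral similarity.

    expert_outputs: array of shape (n_experts, n_tokens, d), each expert's
    activations on a small calibration set (a hypothetical stand-in for the
    paper's calibration data). Returns sorted indices of experts to keep.
    """
    n = expert_outputs.shape[0]
    flat = expert_outputs.reshape(n, -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sim = flat @ flat.T  # pairwise cosine similarity of expert behaviors

    kept = [int(np.argmax(sim.sum(axis=1)))]  # seed with the most "central" expert
    while len(kept) < n_keep:
        # Greedily add the expert least similar to anything already kept,
        # so the kept set covers distinct behavioral clusters.
        coverage = sim[:, kept].max(axis=1)
        coverage[kept] = np.inf  # never re-select a kept expert
        kept.append(int(np.argmin(coverage)))
    return sorted(kept)

def magnitude_prune(weight, sparsity):
    """Unstructured step: zero the smallest-|w| entries of a kept expert's
    weight matrix (a standard baseline, not necessarily the paper's criterion)."""
    k = int(weight.size * sparsity)
    thresh = np.partition(np.abs(weight).ravel(), k)[k]
    return np.where(np.abs(weight) < thresh, 0.0, weight)
```

A caller would first apply `select_experts_to_keep` to drop whole experts (the structured stage), then run `magnitude_prune` on each surviving expert's weights (the unstructured stage), matching the "structured first, then unstructured" ordering the summary describes.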
📝 Abstract
Mixture-of-experts (MoE) architectures reduce inference cost in large language models (LLMs) by sparsely activating experts. Despite this reduction, the massive number of experts still makes MoEs expensive to serve. In this paper, we study how to address this by pruning MoEs. Among pruning methodologies, unstructured pruning is known to achieve the highest performance at a given pruning ratio, since structured pruning imposes constraints on the sparsification structure. This is intuitive, as the solution space of unstructured pruning subsumes that of structured pruning. Counterintuitively, however, we find that expert pruning, a form of structured pruning, can precede unstructured pruning and thereby outperform unstructured-only pruning. Because existing expert pruning requires $O(\frac{k^n}{\sqrt{n}})$ forward passes for $n$ experts and thus cannot scale to recent MoEs, we propose a scalable alternative with $O(1)$ complexity that nevertheless outperforms the more expensive methods. The key idea is to leverage a latent structure between experts, based on behavioral similarity, such that the greedy decision of whether to prune each expert closely captures the joint pruning effect. Our method is highly effective: for Snowflake Arctic, a 480B-parameter MoE with 128 experts, it needs only one H100 and two hours to achieve nearly no loss in performance at 40% sparsity, even on generative tasks such as GSM8K, where state-of-the-art unstructured pruning fails. The code will be made publicly available.
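For intuition on why joint expert pruning is combinatorially expensive, here is a rough counting sketch (my illustration under stated assumptions, not the paper's derivation). If each of $n$ experts admits $k$ pruning choices, exhaustively evaluating joint configurations takes on the order of $k^n$ forward passes; Stirling-type estimates over balanced subsets contribute the $\frac{1}{\sqrt{n}}$ factor, as in

\[
\binom{n}{n/2} \;\sim\; \frac{2^n}{\sqrt{\pi n / 2}},
\]

which is consistent with the $O(\frac{k^n}{\sqrt{n}})$ cost quoted for existing expert pruning. A greedy per-expert decision, by contrast, fixes each expert exactly once, and with behavioral similarities precomputed, each decision costs a constant number of operations rather than a forward pass, giving the $O(1)$ complexity claimed above.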