Towards a Comprehensive Scaling Law of Mixture-of-Experts

πŸ“… 2025-09-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing dense-model scaling laws fail to characterize the non-monotonic, multi-factor coupling in Mixture-of-Experts (MoE) models, where performance depends jointly on dataset size, total parameters, activated parameters, number of active experts, and shared-expert ratio. Method: Through 446 controlled experiments spanning these five dimensions, we empirically derive the first joint scaling law for MoE architectures, integrating theoretical analysis with systematic empirical validation. Contribution/Results: We establish that the optimal number of active experts and shared-expert ratio are independent of architecture and data size, while the optimal activation ratio decreases monotonically with model scale. Our law achieves high predictive accuracy, enabling principled, quantitative design of optimal MoE configurations. It constitutes the first theoretically grounded, systematically validated, and quantitatively actionable scaling paradigm for large-scale MoE models.

πŸ“ Abstract
Mixture-of-Experts (MoE) models have become the consensus approach for parameter-efficient scaling and cost-effective deployment of large language models. However, existing scaling laws for dense models are inapplicable to MoE models, a gap that stems from three critical challenges: the multiplicity of influencing factors, their intricate coupling relationships, and the non-monotonic nature of their performance impacts. These challenges collectively necessitate a fine-grained investigation into MoE-specific scaling laws. In this work, we systematically decompose MoE settings, identifying five key factors that influence model performance from both size and structural perspectives: data size ($D$), total model size ($N$), activated model size ($N_a$), number of active experts ($G$), and the ratio of shared experts ($S$). We design $446$ controlled experiments to characterize their marginal effects, ultimately constructing a comprehensive and precise joint MoE scaling law that accounts for all essential factors. Furthermore, we derive the theoretically optimal and practically efficiency-aware configurations for $G$, $S$, and $N_a/N$, with detailed analyses. Our results demonstrate that the optimal settings for $G$ and $S$ are independent of both model architecture and data size, and that as $N$ scales, the optimal activation ratio $N_a/N$ becomes sparser. Our proposed MoE scaling law can serve as accurate and insightful guidance for future MoE model design and training.
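The abstract factors MoE performance into five variables ($D$, $N$, $N_a$, $G$, $S$) but does not reproduce the law's functional form on this page. As a rough illustration of how such a joint law could be fit from controlled experiments, the sketch below assumes a simple multiplicative power-law ansatz (log-linear in the factors) and recovers its exponents by least squares. The ansatz, coefficient values, and synthetic data are all assumptions for illustration, not the paper's actual law.

```python
import numpy as np

def fit_joint_power_law(D, N, Na, G, S, loss):
    """Fit a hypothetical multiplicative ansatz
        L = c * D^dD * N^dN * Na^dNa * G^dG * S^dS
    by linear least squares in log space. This is an illustrative
    stand-in for a joint MoE scaling law, not the paper's formula.
    Returns (log c, dD, dN, dNa, dG, dS)."""
    X = np.column_stack([np.ones_like(D), np.log(D), np.log(N),
                         np.log(Na), np.log(G), np.log(S)])
    coef, *_ = np.linalg.lstsq(X, np.log(loss), rcond=None)
    return coef

# Synthetic "experiments": 446 random configurations drawn from a
# known power law, so the fit can be checked against the truth.
rng = np.random.default_rng(0)
n = 446
D, N, Na = rng.uniform(1, 100, n), rng.uniform(1, 100, n), rng.uniform(1, 100, n)
G, S = rng.uniform(1, 16, n), rng.uniform(0.1, 1.0, n)
true = np.array([1.5, -0.30, -0.20, -0.10, -0.05, 0.02])
loss = (np.exp(true[0]) * D**true[1] * N**true[2]
        * Na**true[3] * G**true[4] * S**true[5])

coef = fit_joint_power_law(D, N, Na, G, S, loss)
```

With noiseless synthetic data the least-squares fit recovers the generating exponents essentially exactly; with real measurements one would add noise handling and goodness-of-fit checks.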
Problem

Research questions and friction points this paper is trying to address.

Develops scaling laws for Mixture-of-Experts models
Identifies key factors influencing MoE model performance
Determines optimal configurations for efficient MoE design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically decomposing MoE settings into five key factors
Designing controlled experiments to characterize marginal effects
Deriving theoretically optimal and efficiency-aware configurations
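One way to read "efficiency-aware optimal configurations": once a scaling law is fitted, the best activation size falls out of minimizing predicted loss under a fixed compute budget, since at fixed FLOPs a larger $N_a$ buys capacity but shrinks the affordable data budget. The toy law, coefficients, and the $C \approx 6 N_a D$ compute approximation below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Hypothetical fitted coefficients -- illustration only, not the paper's values.
A, alpha = 1.0, 0.3   # capacity term: loss falls as activated params Na grow
B, beta  = 1.0, 0.3   # data term: loss falls as training tokens D grow
C_FLOPS  = 1e9        # fixed training compute budget, with C ~ 6 * Na * D

def predicted_loss(Na):
    """Toy law L = A*Na^-alpha + B*D^-beta with D = C/(6*Na):
    at fixed compute, growing Na trades data budget for capacity."""
    D = C_FLOPS / (6.0 * Na)
    return A * Na**(-alpha) + B * D**(-beta)

# Grid-search the activated-parameter count that minimizes predicted loss.
grid = np.logspace(2, 7, 2000)
Na_opt = grid[np.argmin(predicted_loss(grid))]  # interior optimum, not an endpoint
```

The same trade-off has a closed form here: setting the derivative to zero gives $N_a^* = (\alpha A / (\beta B (6/C)^\beta))^{1/(\alpha+\beta)}$, which the grid search should match up to grid spacing.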