Towards Greater Leverage: Scaling Laws for Efficient Mixture-of-Experts Language Models

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing research lacks a predictive, quantitative model of Mixture-of-Experts (MoE) model capabilities. Method: We propose the Efficiency Leverage (EL) scaling law—the first systematic framework quantifying the nonlinear interplay among expert activation ratio, total compute budget, and expert granularity; the first two follow power-law scaling, while the latter acts as a nonlinear modulator. Validated across more than 300 large-scale empirical model trainings, the law demonstrates broad universality. Contribution/Results: Leveraging EL, we design Ling-mini-beta—a 0.85B active-parameter MoE model trained on 1T high-quality tokens—achieving performance on par with a 6.1B dense model while reducing computational cost by over 7×. This work establishes the first predictive, interpretable scaling framework for controllable MoE scaling and efficient deployment.

📝 Abstract
Mixture-of-Experts (MoE) has become a dominant architecture for scaling Large Language Models (LLMs) efficiently by decoupling total parameters from computational cost. However, this decoupling creates a critical challenge: predicting the model capacity of a given MoE configuration (e.g., expert activation ratio and granularity) remains an unresolved problem. To address this gap, we introduce Efficiency Leverage (EL), a metric quantifying the computational advantage of an MoE model over a dense equivalent. We conduct a large-scale empirical study, training over 300 models up to 28B parameters, to systematically investigate the relationship between MoE architectural configurations and EL. Our findings reveal that EL is primarily driven by the expert activation ratio and the total compute budget, both following predictable power laws, while expert granularity acts as a non-linear modulator with a clear optimal range. We integrate these discoveries into a unified scaling law that accurately predicts the EL of an MoE architecture based on its configuration. To validate our derived scaling laws, we designed and trained Ling-mini-beta, a pilot model for the Ling-2.0 series with only 0.85B active parameters, alongside a 6.1B dense model for comparison. When trained on an identical 1T high-quality token dataset, Ling-mini-beta matched the performance of the 6.1B dense model while consuming over 7× fewer computational resources, thereby confirming the accuracy of our scaling laws. This work provides a principled and empirically grounded foundation for the scaling of efficient MoE models.
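The reported 7× saving can be sanity-checked with a back-of-the-envelope calculation. The sketch below uses the common C ≈ 6·N·D approximation for training FLOPs (N = active parameters, D = training tokens); this approximation is an assumption on our part, not a formula from the paper, though the model sizes and token count come from the abstract.

```python
def train_flops(active_params: float, tokens: float) -> float:
    """Approximate training compute via the standard 6*N*D rule (assumed)."""
    return 6.0 * active_params * tokens

D = 1e12                          # 1T tokens, identical for both models
dense = train_flops(6.1e9, D)     # 6.1B dense baseline
moe = train_flops(0.85e9, D)      # Ling-mini-beta, 0.85B active parameters

leverage = dense / moe            # ratio of dense to MoE training compute
print(f"approximate compute leverage: {leverage:.1f}x")
```

Under this approximation the ratio is about 7.2×, consistent with the abstract's "over 7×" claim, since with equal token counts the FLOPs ratio reduces to the ratio of active parameter counts.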
Problem

Research questions and friction points this paper is trying to address.

Predicting model capacity of MoE configurations remains unresolved
Quantifying computational advantage of MoE over dense models
Establishing scaling laws for efficient MoE model design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing Efficiency Leverage (EL) metric
Empirical study on MoE configurations
Unified scaling law for MoE
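To make the shape of the claimed law concrete, here is an illustrative sketch of a function with the qualitative structure the abstract describes: EL driven by the activation ratio A and compute budget C via power laws, with expert granularity G acting as a non-linear modulator that peaks in an optimal range. The exponents, constants, and the log-normal-style modulator below are invented placeholders for illustration, not the paper's fitted form or values.

```python
import math

def efficiency_leverage(A: float, C: float, G: float,
                        a: float = 1.0, alpha: float = 0.5, beta: float = 0.1,
                        G_opt: float = 8.0, width: float = 1.0) -> float:
    """Toy EL surface: power laws in A and C, non-linear bump in G (all
    parameter values are hypothetical placeholders)."""
    power_law = a * A ** (-alpha) * C ** beta   # sparser activation -> higher EL
    # Granularity modulator: helps near G_opt, falls off away from it.
    modulator = math.exp(-((math.log(G) - math.log(G_opt)) ** 2)
                         / (2 * width ** 2))
    return power_law * modulator

# Qualitative behavior at a fixed budget: a sparser model (A = 1/8) shows
# higher leverage than a denser one (A = 1/2), and granularity at the
# optimum beats an extreme value.
budget = 1e21
print(efficiency_leverage(0.125, budget, 8.0) >
      efficiency_leverage(0.5, budget, 8.0))
print(efficiency_leverage(0.125, budget, 8.0) >
      efficiency_leverage(0.125, budget, 64.0))
```

The design point this toy form captures is that activation ratio and compute enter multiplicatively as power laws (straight lines on log-log axes), while granularity contributes a bounded correction with an interior optimum rather than a monotone trend.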
Authors
Changxin Tian (Renmin University of China & Ant Group)
Kunlong Chen (Ling Team, Ant Group)
Jia Liu (Ling Team, Ant Group)
Ziqi Liu (Ling Team, Ant Group)
Zhiqiang Zhang (Ling Team, Ant Group)
Jun Zhou (Ling Team, Ant Group)