🤖 AI Summary
This study investigates the scalability of fine-grained Mixture-of-Experts (MoE) architectures in ultra-large language models (56B total parameters, 17B activated per token). To address training instability, slow convergence, and downstream performance bottlenecks, we propose a practical recipe tailored for billion-scale training: a sparsely activated fine-grained MoE structure, dynamic top-k routing, load-balancing mechanisms, and distributed optimization strategies. We present the first empirical validation of fine-grained MoE superiority at the >50B parameter scale, uncovering principled design trade-offs among expert count, expert size, and routing strategy. Experiments demonstrate an 8.2% reduction in validation loss and a 2.4% average accuracy gain across multiple benchmarks (MMLU, ARC, HellaSwag) versus dense baselines, alongside markedly improved training stability. Our work establishes a robust pathway for efficient training and deployment of tens-of-billions-parameter MoE models.
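The core ingredients named above (a fine-grained pool of many small experts, top-k routing, and a load-balancing objective) can be sketched as follows. This is a minimal NumPy illustration under my own assumptions, not the authors' implementation: experts are reduced to single linear maps, and the auxiliary loss follows the common Switch-Transformer-style formulation (fraction of tokens routed to each expert times its mean gate probability), which the summary does not specify.

```python
import numpy as np

def topk_route(logits, k):
    """Keep the k highest-scoring experts per token; softmax over the kept scores."""
    idx = np.argsort(logits, axis=-1)[:, ::-1][:, :k]           # (tokens, k) expert ids
    kept = np.take_along_axis(logits, idx, axis=-1)
    gates = np.exp(kept - kept.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)                  # renormalized gate weights
    return idx, gates

def moe_forward(x, w_gate, experts, k):
    """Fine-grained MoE layer: many small experts, only k active per token."""
    logits = x @ w_gate                                         # (tokens, n_experts) router scores
    idx, gates = topk_route(logits, k)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                                 # loop form for clarity, not speed
        for j in range(k):
            e = idx[t, j]
            out[t] += gates[t, j] * (x[t] @ experts[e])         # each "expert" is a linear map here
    return out, idx

def load_balance_loss(logits, idx, n_experts):
    """Switch-style auxiliary loss (assumed form): pushes routing toward uniform expert usage."""
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)                  # full softmax over all experts
    frac = np.bincount(idx[:, 0], minlength=n_experts) / idx.shape[0]  # top-1 assignment fractions
    return n_experts * float(frac @ probs.mean(axis=0))
```

In a fine-grained configuration, `n_experts` is large and each expert's hidden size is small, so total parameters stay fixed while the router gains finer control over which parameters each token activates.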
📄 Abstract
Mixture-of-Experts (MoE) architectures have emerged as pivotal for scaling Large Language Models (LLMs) efficiently. Fine-grained MoE approaches, which use a larger number of smaller experts, have shown potential to improve model convergence and quality. This work proposes a set of training recipes and provides a comprehensive empirical evaluation of fine-grained MoE, directly comparing its scaling properties against standard MoE configurations for models with up to 56B total (17B active) parameters. We investigate convergence speed, performance on downstream benchmarks, and practical training considerations across various setups. At the largest scale, we show that fine-grained MoE achieves better validation loss and higher accuracy across a set of downstream benchmarks. This study offers empirical grounding and practical insights for leveraging fine-grained MoE in the development of future large-scale models.