🤖 AI Summary
This work addresses the challenges of low expert utilization and computational inefficiency in pretraining large Mixture-of-Experts (MoE) language models by proposing a layer-adaptive expert pruning algorithm. The method introduces, for the first time, a dynamic sparsification mechanism during pretraining that evaluates expert utilization based on token distribution and adaptively prunes redundant experts on a per-layer basis. It further optimizes computational resource allocation through cross-device expert reassignment. When applied to pretraining the Yuan3.0-1T base model from scratch, the approach achieves a 48.3% improvement in training efficiency and a 33.3% reduction in model parameters (from an original 1515B), while maintaining strong performance across multiple downstream tasks.
📝 Abstract
Although Mixture-of-Experts (MoE) Large Language Models (LLMs) deliver superior accuracy with a reduced number of active parameters, their pre-training represents a significant computational bottleneck due to underutilized experts and limited training efficiency. This work introduces a Layer-Adaptive Expert Pruning (LAEP) algorithm designed for the pre-training stage of MoE LLMs. In contrast to previous expert pruning approaches that operate primarily in the post-training phase, the proposed algorithm enhances training efficiency by selectively pruning underutilized experts and reorganizing experts across computing devices according to token distribution statistics. Comprehensive experiments demonstrate that LAEP effectively reduces model size and substantially improves pre-training efficiency. In particular, when pre-training the Yuan3.0-1T Base model from scratch, whose original configuration has 1515B parameters, LAEP achieves a 48.3% improvement in training efficiency alongside a 33.3% parameter reduction, while still delivering excellent performance across multiple domains.
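The abstract does not give LAEP's exact pruning criterion, but the core idea — score each expert by how many tokens the router sends it, then prune underutilized experts independently per layer — can be sketched as follows. This is a minimal illustration under assumed names and thresholds (`util_threshold`, `keep_ratio_floor` are hypothetical parameters, not from the paper), not the authors' implementation:

```python
import numpy as np

def layer_adaptive_prune(token_counts, util_threshold=0.02, keep_ratio_floor=0.5):
    """Sketch of utilization-based, per-layer expert pruning.

    token_counts: array of shape (num_layers, num_experts) holding the number
    of tokens the router dispatched to each expert over a statistics window.
    Returns a list with, for each layer, the indices of experts to keep.
    """
    keep_per_layer = []
    for layer_counts in token_counts:
        total = layer_counts.sum()
        # Fraction of the layer's tokens routed to each expert.
        util = layer_counts / max(total, 1)
        # Keep experts whose utilization clears the threshold...
        candidates = np.where(util >= util_threshold)[0]
        # ...but never prune below a floor fraction of experts, so a layer
        # with a very skewed router still retains capacity.
        min_keep = int(np.ceil(keep_ratio_floor * len(layer_counts)))
        if len(candidates) < min_keep:
            candidates = np.argsort(util)[::-1][:min_keep]
        keep_per_layer.append(sorted(candidates.tolist()))
    return keep_per_layer

# Example: layer 0 has two cold experts; layer 1 is perfectly balanced.
counts = np.array([[100, 1, 100, 1],
                   [50, 50, 50, 50]])
print(layer_adaptive_prune(counts))  # → [[0, 2], [0, 1, 2, 3]]
```

Because the decision is made per layer, different layers can end up with different expert counts, which is also what motivates the cross-device reassignment step: after pruning, surviving experts would be repacked across devices to rebalance load.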