Sigma-Moe-Tiny Technical Report

📅 2025-12-18
🤖 AI Summary
In highly sparse Mixture-of-Experts (MoE) models, severe expert load imbalance undermines training stability and renders conventional load-balancing losses ineffective. Method: This work proposes a progressive sparsification scheduling strategy to construct Sigma-MoE-Tiny, an ultra-sparse MoE model with 96 experts per layer and only one expert activated per token, yielding just 0.5B active parameters out of 20B total. It combines a fine-grained Megatron-LM MoE architecture, dynamic expert activation scheduling, an improved load-balancing optimization objective, and a high-quality pretraining and alignment pipeline. Contribution/Results: Sigma-MoE-Tiny achieves the highest structural sparsity (1/96) among open-source MoE models, and the entire training process completes without a single crash. Load balancing is substantially improved, with no irrecoverable loss spikes. Despite activating only 0.5B parameters, the model outperforms mainstream dense and MoE models of comparable, and in some cases several-times-larger, parameter counts on downstream benchmarks.
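
The 1-of-96 routing described above can be sketched as a toy top-1 MoE forward pass. This is a minimal NumPy illustration, not the paper's implementation; the router projection, expert functions, and all shapes below are assumed for the sake of the example:

```python
import numpy as np

def top1_moe_forward(x, router_w, experts):
    """Route each token to exactly one expert (top-1 gating).

    x:        (tokens, d_model) token hidden states
    router_w: (d_model, n_experts) router projection (assumed)
    experts:  list of callables, one per expert (toy linear experts here)
    """
    logits = x @ router_w                              # (tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)         # softmax router scores
    chosen = probs.argmax(axis=-1)                     # top-1 expert per token
    gate = probs[np.arange(len(x)), chosen]            # scale output by router prob
    out = np.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = chosen == e
        if mask.any():                                 # only chosen experts compute
            out[mask] = gate[mask, None] * expert(x[mask])
    return out, chosen

rng = np.random.default_rng(0)
d, n_exp, tokens = 8, 96, 32
x = rng.standard_normal((tokens, d))
router_w = rng.standard_normal((d, n_exp))
# toy experts: independent small linear maps
weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_exp)]
experts = [lambda h, w=w: h @ w for w in weights]
out, chosen = top1_moe_forward(x, router_w, experts)
```

Because only the selected expert runs per token, compute scales with the 0.5B active parameters rather than the 20B total.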

📝 Abstract
Mixture-of-Experts (MoE) has emerged as a promising paradigm for foundation models due to its efficient and powerful scalability. In this work, we present Sigma-MoE-Tiny, an MoE language model that achieves the highest sparsity among existing open-source models. Sigma-MoE-Tiny employs fine-grained expert segmentation with up to 96 experts per layer, while activating only one expert for each token, resulting in 20B total parameters with just 0.5B activated. The major challenge introduced by such extreme sparsity lies in expert load balancing. We find that the widely-used load balancing loss tends to become ineffective in the lower layers under this setting. To address this issue, we propose a progressive sparsification schedule aiming to balance expert utilization and training stability. Sigma-MoE-Tiny is pre-trained on a diverse and high-quality corpus, followed by post-training to further unlock its capabilities. The entire training process remains remarkably stable, with no occurrence of irrecoverable loss spikes. Comprehensive evaluations reveal that, despite activating only 0.5B parameters, Sigma-MoE-Tiny achieves top-tier performance among counterparts of comparable or significantly larger scale. In addition, we provide an in-depth discussion of load balancing in highly sparse MoE models, offering insights for advancing sparsity in future MoE architectures. Project page: https://qghuxmu.github.io/Sigma-MoE-Tiny Code: https://github.com/microsoft/ltp-megatron-lm
Problem

Research questions and friction points this paper is trying to address.

Addresses expert load balancing in highly sparse MoE models
Proposes progressive sparsification to stabilize training in sparse settings
Achieves top performance with extreme sparsity using fine-grained segmentation
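
The summary does not spell out the schedule itself, but a progressive sparsification schedule can be illustrated as annealing the number of activated experts per token down to one over training, so the router learns a balanced assignment before the model reaches its final 1-of-96 sparsity. All constants below (starting k, step counts, linear annealing) are hypothetical:

```python
def progressive_topk(step: int, total_steps: int, k_start: int = 8, k_final: int = 1) -> int:
    """Hypothetical progressive sparsification schedule (illustrative only):
    start with k_start activated experts per token and linearly anneal
    down to k_final over total_steps training steps."""
    frac = min(step / max(total_steps, 1), 1.0)
    k = round(k_start + frac * (k_final - k_start))
    return max(k, k_final)

# sampled along a 1000-step run: k decreases monotonically from 8 to 1
schedule = [progressive_topk(s, 1000) for s in (0, 250, 500, 750, 1000)]
```

The key property is that sparsity increases gradually rather than starting at its extreme value, which is how the schedule trades expert utilization against training stability.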
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained expert segmentation with 96 experts per layer
Progressive sparsification schedule for expert load balancing
Activating only 0.5B parameters out of 20B total parameters
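
The 0.5B-active-of-20B-total figure is consistent with 1-of-96 routing under a simple parameter split. The shared/expert split below is assumed purely for illustration; only the 20B total, the 96 experts, and the single activated expert come from the report:

```python
# Hypothetical sanity check of the active-parameter count. The split between
# shared parameters (embeddings, attention, router) and expert parameters is
# assumed, not taken from the paper.
total_params = 20e9
n_experts, active_per_token = 96, 1
shared_params = 0.3e9                      # assumed always-active parameters
expert_params = total_params - shared_params
active = shared_params + expert_params * active_per_token / n_experts
print(f"{active / 1e9:.2f}B active")       # roughly 0.5B
```

With only 1 of 96 experts firing, the expert parameters contribute about 1/96 of their total to each forward pass, so a small always-active core plus ~0.2B of expert weights lands near the reported 0.5B.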