Scalable Training of Mixture-of-Experts Models with Megatron Core

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the coupled memory, communication, and computation bottlenecks in large-scale Mixture-of-Experts (MoE) model training by proposing a full-stack co-optimization framework. Central to the approach is Parallel Folding, a flexible multi-dimensional parallelism strategy, which the framework combines with fine-grained recomputation, expert scheduling and offloading, Grouped GEMM kernel fusion, CUDA Graphs, and FP8/NVFP4 low-precision training to overlap communication with computation efficiently. The framework supports scalable training of MoE models ranging from billions to trillions of parameters across thousands of GPUs. On NVIDIA GB300/GB200 systems, it achieves 1,233/1,048 TFLOPS/GPU for DeepSeek-V3-685B and 974/919 TFLOPS/GPU for Qwen3-235B, significantly advancing the system efficiency and accessibility of large-scale MoE training.

📝 Abstract
Scaling Mixture-of-Experts (MoE) training introduces systems challenges absent in dense models. Because each token activates only a subset of experts, total parameters can grow much faster than per-token computation, creating coupled constraints across memory, communication, and computation. Optimizing one dimension often shifts pressure to another, demanding co-design across the full system stack. We address these challenges for MoE training through integrated optimizations spanning memory (fine-grained recomputation, offloading, etc.), communication (optimized dispatchers, overlapping, etc.), and computation (Grouped GEMM, fusions, CUDA Graphs, etc.). The framework also provides Parallel Folding for flexible multi-dimensional parallelism, low-precision training support for FP8 and NVFP4, and efficient long-context training. On NVIDIA GB300 and GB200, it achieves 1,233/1,048 TFLOPS/GPU for DeepSeek-V3-685B and 974/919 TFLOPS/GPU for Qwen3-235B. As a performant, scalable, and production-ready open-source solution, it has been used across academia and industry for training MoE models ranging from billions to trillions of parameters on clusters scaling up to thousands of GPUs. This report explains how these techniques work, their trade-offs, and their interactions at the systems level, providing practical guidance for scaling MoE models with Megatron Core.
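The sparsity the abstract describes comes from top-k expert routing: each token's router picks a few experts, and tokens are then bucketed per expert so that all of an expert's tokens can be processed in one batched matmul (the pattern a Grouped GEMM kernel fuses). The sketch below is a minimal, dependency-free illustration of that routing and grouping; all names are hypothetical and it is not Megatron Core's actual API.

```python
# Illustrative sketch of top-k MoE routing and per-expert token grouping.
# Function names (route_topk, group_by_expert) are made up for this example.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_topk(router_logits, k):
    """For each token, select the top-k experts and their normalized gate weights."""
    assignments = []
    for logits in router_logits:  # one row of router logits per token
        probs = softmax(logits)
        topk = sorted(range(len(probs)), key=lambda e: probs[e], reverse=True)[:k]
        norm = sum(probs[e] for e in topk)
        assignments.append([(e, probs[e] / norm) for e in topk])
    return assignments

def group_by_expert(assignments, num_experts):
    """Bucket (token, gate_weight) pairs by expert -- the variable-size
    per-expert batches that a Grouped GEMM would process in one fused call."""
    buckets = [[] for _ in range(num_experts)]
    for tok, pairs in enumerate(assignments):
        for expert, weight in pairs:
            buckets[expert].append((tok, weight))
    return buckets

# 3 tokens, 4 experts, top-2 routing
logits = [[2.0, 1.0, 0.1, -1.0],
          [0.0, 3.0, 0.5, 0.2],
          [1.0, 1.0, 2.5, 0.3]]
assignments = route_topk(logits, k=2)
buckets = group_by_expert(assignments, num_experts=4)
```

Note how per-token compute stays fixed at k expert FFNs regardless of how many experts exist, which is exactly why total parameters can outgrow per-token FLOPs, and why the dispatch/combine steps around `group_by_expert` become the all-to-all communication that the paper's overlap techniques target.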
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
scalable training
system constraints
memory-computation-communication trade-off
large-scale MoE
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
Scalable Training
System Co-design
Low-precision Training
Multi-dimensional Parallelism