🤖 AI Summary
This work addresses the memory bottleneck in training large-scale Mixture-of-Experts (MoE) models, which is primarily caused by routing buffers and materialized intermediate activations that constrain batch size and sequence length. The authors propose a co-optimized system design that eliminates redundant storage through end-to-end token scheduling and buffer-free data structures. By integrating custom GPU kernels with an intelligent activation checkpointing strategy, this approach achieves the first joint optimization of data structures, kernels, and checkpointing in MoE training, thereby completely avoiding intermediate activation materialization. Compared to existing frameworks, the proposed method reduces memory consumption by over 50% and accelerates training by more than 4×.
📝 Abstract
The pervasive "memory wall" bottleneck is significantly amplified in modern large-scale Mixture-of-Experts (MoE) architectures. MoE's inherent architectural sparsity leads to sparse arithmetic computation and also introduces substantial activation memory overheads -- driven by large token routing buffers and the need to materialize and buffer intermediate tensors. This memory pressure limits the maximum batch size and sequence length that can fit on GPUs, and also results in excessive data movement that hinders performance and efficient model scaling. We present MoEBlaze, a memory-efficient MoE training framework that addresses these issues through a co-designed system approach: (i) an end-to-end token dispatch and MoE training method with optimized data structures that eliminates intermediate buffers and activation materialization, and (ii) co-designed kernels with smart activation checkpointing that reduce memory footprint while simultaneously improving performance. We demonstrate that MoEBlaze achieves over 4× speedups and over 50% memory savings compared to existing MoE frameworks.
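The core idea behind buffer-free token dispatch can be illustrated with a small sketch. This is a hypothetical illustration, not MoEBlaze's actual implementation: instead of materializing a dense `[num_tokens × num_experts]` one-hot dispatch buffer, a stable counting sort groups token indices by their assigned expert, so each expert reads its tokens through a compact index list plus per-expert offsets (the function name `dispatch` and all shapes here are assumptions).

```python
# Sketch of buffer-free token dispatch (illustrative only; names and
# data layouts are assumptions, not MoEBlaze's actual data structures).
# Rather than a dense [num_tokens x num_experts] dispatch mask, we build
# a permutation of token indices grouped by expert, plus offset bounds.

def dispatch(expert_ids, num_experts):
    """Group token indices by expert without a dense dispatch mask.

    expert_ids: list where expert_ids[t] is the expert chosen for token t
                (top-1 routing, for brevity).
    Returns (order, offsets): `order` is a permutation of token indices
    sorted by expert; expert e's tokens are order[offsets[e]:offsets[e+1]].
    """
    # Counting pass: how many tokens each expert receives.
    counts = [0] * num_experts
    for e in expert_ids:
        counts[e] += 1
    # Exclusive prefix sum gives each expert's start offset.
    offsets = [0] * (num_experts + 1)
    for e in range(num_experts):
        offsets[e + 1] = offsets[e] + counts[e]
    # Scatter token indices into their expert's slot (stable counting sort).
    order = [0] * len(expert_ids)
    cursor = offsets[:-1].copy()  # next free slot per expert
    for t, e in enumerate(expert_ids):
        order[cursor[e]] = t
        cursor[e] += 1
    return order, offsets

if __name__ == "__main__":
    expert_ids = [2, 0, 1, 0, 2, 1, 0]  # token t -> expert expert_ids[t]
    order, offsets = dispatch(expert_ids, num_experts=3)
    print(order)    # token indices grouped by expert: [1, 3, 6, 2, 5, 0, 4]
    print(offsets)  # per-expert boundaries: [0, 3, 5, 7]
```

The memory cost here is O(num_tokens + num_experts) for the index and offset arrays, versus O(num_tokens × num_experts) for a dense dispatch mask; on a GPU the analogous primitive would be a segmented sort or radix sort over expert IDs rather than this scalar loop.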