FlashDMoE: Fast Distributed MoE in a Single Kernel

📅 2025-06-05
🤖 AI Summary
Existing Mixture-of-Experts (MoE) models suffer from low GPU utilization, high end-to-end latency, and poor exploitation of task locality due to CPU-dominated scheduling, host-initiated communication, and frequent CUDA kernel launches. This work introduces the first fully GPU-resident MoE operator, achieved via tight kernel–hardware co-design. It enables fine-grained pipelining of dispatch–compute–combine stages, integrates fused sparse routing with All-to-All communication within a single persistent CUDA kernel, and implements a lightweight, device-side self-triggered communication protocol. Evaluated on an 8-GPU H100 node, our approach achieves a 6× latency reduction, 5.7× throughput improvement, 4× better weak scaling efficiency, and 9× higher GPU utilization (under FP32) compared to state-of-the-art FP16 baselines.

📝 Abstract
The computational sparsity of Mixture-of-Experts (MoE) models enables sub-linear growth in compute cost as model size increases, offering a scalable path to training massive neural networks. However, existing implementations suffer from low GPU utilization, significant latency overhead, and a fundamental inability to leverage task locality, primarily due to CPU-managed scheduling, host-initiated communication, and frequent kernel launches. To overcome these limitations, we develop FlashDMoE, a fully GPU-resident MoE operator that fuses expert computation and inter-GPU communication into a single persistent GPU kernel. FlashDMoE enables fine-grained pipelining of dispatch, compute, and combine phases, eliminating launch overheads and reducing idle gaps. Its device-initiated communication protocol introduces payload-efficient data transfers, significantly shrinking buffer sizes in sparsely activated MoE layers. When evaluated on a single 8-H100 GPU node with MoE models having up to 128 experts and 16K token sequences, FlashDMoE achieves up to 6× lower latency, 5.7× higher throughput, 4× better weak scaling efficiency, and 9× higher GPU utilization compared to state-of-the-art baselines, despite using FP32 while baselines use FP16. FlashDMoE demonstrates that principled GPU kernel-hardware co-design is key to unlocking the performance ceiling of large-scale distributed ML workloads.
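The contrast the abstract draws between host-launched kernels and a single persistent kernel can be sketched with a toy analogy: one long-lived worker that is "launched" once and then pulls fine-grained per-tile tasks itself, instead of the host issuing a separate launch per phase. This is only an illustrative sketch, not the paper's CUDA implementation; the names `persistent_worker` and `run_pipeline`, and the tile granularity, are assumptions for the example.

```python
from queue import Queue
import threading

# Task labels mirroring the three MoE phases described in the abstract.
DISPATCH, COMPUTE, COMBINE, STOP = "dispatch", "compute", "combine", "stop"

def persistent_worker(tasks: Queue, log: list):
    """A single long-lived worker loop, standing in for a persistent kernel:
    launched once, it then fetches fine-grained phase tasks on its own,
    with no per-phase relaunch (and hence no launch overhead) in between."""
    while True:
        phase, tile = tasks.get()
        if phase == STOP:
            break
        log.append((phase, tile))  # stand-in for doing the phase's actual work

def run_pipeline(num_tiles: int) -> list:
    tasks: Queue = Queue()
    log: list = []
    worker = threading.Thread(target=persistent_worker, args=(tasks, log))
    worker.start()  # one "launch" covers the whole MoE layer
    # Enqueue interleaved per-tile tasks: dispatch, compute, and combine are
    # pipelined at tile granularity rather than separated by full-layer barriers.
    for t in range(num_tiles):
        tasks.put((DISPATCH, t))
        tasks.put((COMPUTE, t))
        tasks.put((COMBINE, t))
    tasks.put((STOP, None))
    worker.join()
    return log
```

The point of the sketch is scheduling shape, not performance: all three phases of every tile flow through one resident loop, which is the property that lets the real kernel overlap communication and compute.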
Problem

Research questions and friction points this paper is trying to address.

Low GPU utilization in existing MoE implementations
Significant latency overhead due to CPU-managed scheduling
Inability to leverage task locality efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusion of expert computation and inter-GPU communication into a single persistent GPU kernel
Fine-grained pipelining of the dispatch, compute, and combine phases
Payload-efficient, device-initiated data transfers
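The routing-and-combine logic behind the payload-efficiency claim can be illustrated with a minimal single-device NumPy sketch: each expert receives only the tokens actually routed to it, so dispatch buffers scale with routed tokens rather than with a dense tokens-times-experts layout. This is a simplified stand-in, not FlashDMoE's fused CUDA kernel; `moe_forward`, `router_w`, and the expert callables are hypothetical names for the example.

```python
import numpy as np

def moe_forward(tokens, router_w, experts, k=2):
    """Toy single-device MoE forward pass: route -> dispatch -> compute -> combine.

    tokens:   (T, d) token activations
    router_w: (d, E) router weights
    experts:  list of E callables, each mapping (n, d) -> (n, d)
    k:        experts activated per token
    """
    E = router_w.shape[1]
    # Routing: softmax gate over experts, keep the top-k per token.
    logits = tokens @ router_w
    gates = np.exp(logits - logits.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)
    topk = np.argsort(-gates, axis=1)[:, :k]          # (T, k) expert ids

    out = np.zeros_like(tokens)
    for e in range(E):
        # Dispatch: gather only the tokens routed to expert e
        # (payload-efficient: buffer size tracks routed tokens, not T * E).
        idx = np.nonzero((topk == e).any(axis=1))[0]
        if idx.size == 0:
            continue
        # Compute: run expert e on its shard of tokens.
        y = experts[e](tokens[idx])
        # Combine: gate-weight the expert output and scatter-add it back.
        out[idx] += gates[idx, e:e + 1] * y
    return out
```

In the distributed setting the gather and scatter-add steps become the All-to-All dispatch and combine transfers; the sketch shows why sparsity lets those transfers carry only the activated tokens.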
Osayamen Jonathan Aimuyo
Cornell University
Byungsoo Oh
Cornell University
Rachee Singh
Cornell University
Networking · Networked Systems