Least-Loaded Expert Parallelism: Load Balancing An Imbalanced Mixture-of-Experts

📅 2026-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the severe expert routing imbalance in Expert Parallelism (EP) during inference or post-training, which often overloads certain devices and creates computational and memory bottlenecks. To tackle this issue, the paper introduces a dynamic rerouting mechanism—the first of its kind for non-uniform MoE routing—that migrates tokens and expert parameters from overloaded to underutilized devices while preserving model expressiveness, thereby minimizing latency and satisfying memory constraints. This approach explicitly relaxes the implicit assumption of routing balance in conventional EP and integrates a hardware-aware hyperparameter tuning framework to significantly enhance deployment efficiency. Experimental results demonstrate up to 5× speedup and a 4× reduction in peak memory usage compared to standard EP, with a 1.9× improvement in inference throughput on the gpt-oss-120b model.

📝 Abstract
Mixture-of-Experts (MoE) models are typically pre-trained with explicit load-balancing constraints to ensure statistically balanced expert routing. Despite this, we observe that even well-trained MoE models exhibit significantly imbalanced routing. This behavior is arguably natural, and even desirable, as imbalanced routing allows models to concentrate domain-specific knowledge within a subset of experts. Expert parallelism (EP) is designed to scale MoE models by distributing experts across multiple devices, but with a less-discussed assumption of balanced routing. Under extreme imbalance, EP can funnel a disproportionate number of tokens to a small number of experts, leading to compute- and memory-bound failures on overloaded devices during post-training or inference, where explicit load balancing is often inapplicable. We propose Least-Loaded Expert Parallelism (LLEP), a novel EP algorithm that dynamically reroutes excess tokens and associated expert parameters from overloaded devices to underutilized ones. This ensures that all devices complete their workloads within the minimum collective latency while respecting memory constraints. Across different model scales, LLEP achieves up to 5x speedup and 4x reduction in peak memory usage compared to standard EP. This enables faster and higher-throughput post-training and inference, with a ~1.9x speedup for gpt-oss-120b. We support our method with extensive theoretical analysis and comprehensive empirical evaluations, including ablation studies. These results illuminate key trade-offs and enable a principled framework for hardware-specific hyper-parameter tuning to achieve optimal performance.
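The core idea of rerouting excess tokens from overloaded devices to underutilized ones can be illustrated with a greedy least-loaded sketch. Note this is an illustrative toy, not the paper's actual LLEP algorithm: the function name `rebalance`, the per-device `capacity` bound, and the greedy destination choice are all assumptions for demonstration, and real EP would also account for expert-parameter migration cost and communication latency.

```python
def rebalance(device_loads, capacity):
    """Toy greedy least-loaded rebalancing (illustrative only, not the
    paper's LLEP algorithm): move excess token load from devices above
    `capacity` to whichever other device is currently least loaded.

    Returns the final per-device loads and a migration plan as
    (src_device, dst_device, n_tokens) tuples.
    """
    loads = list(device_loads)
    migrations = []
    for src in range(len(loads)):
        while loads[src] > capacity:
            # Pick the least-loaded other device as the destination.
            dst = min((d for d in range(len(loads)) if d != src),
                      key=lambda d: loads[d])
            room = capacity - loads[dst]
            if room <= 0:
                break  # every other device is already at capacity
            moved = min(loads[src] - capacity, room)
            loads[src] -= moved
            loads[dst] += moved
            migrations.append((src, dst, moved))
    return loads, migrations


# Example: device 0 is severely overloaded relative to a capacity of 40.
final, plan = rebalance([100, 10, 10, 10], capacity=40)
print(final)  # [40, 40, 40, 10]
print(plan)   # [(0, 1, 30), (0, 2, 30)]
```

The greedy choice here minimizes the maximum per-device load when total load fits within aggregate capacity, which mirrors the paper's stated goal of completing all workloads within the minimum collective latency under memory constraints.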
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
Load Balancing
Expert Parallelism
Imbalanced Routing
Memory Bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
Expert Parallelism
Load Balancing
Dynamic Routing
Memory Efficiency