HarMoEny: Efficient Multi-GPU Inference of MoE Models

📅 2025-06-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address high latency in multi-GPU Mixture-of-Experts (MoE) inference caused by imbalanced expert and GPU load, this paper proposes combining dynamic token redistribution with asynchronous expert prefetching. The method comprises: (1) a real-time GPU-load-aware token redirection mechanism that routes incoming tokens to underutilized GPUs; and (2) system-level asynchronous prefetching of expert weights, decoupling computation from I/O-bound weight loading. Together, these techniques achieve near-optimal inter-GPU load balancing, substantially reducing GPU idle time and first-token latency. Experimental results demonstrate that, compared to the next-best baseline, HarMoEny improves throughput by 37%–70% and reduces time-to-first-token by 34%–41%; an ablation further shows its scheduling policy cuts GPU idle time by up to 84%.
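The first technique, GPU-load-aware token redirection, can be illustrated with a minimal sketch. The function below greedily routes each token to its expert's home GPU unless that GPU already exceeds a slack factor over the ideal per-GPU load, in which case the token is redirected to the currently least-loaded GPU. The function names, data layout, and the threshold policy are illustrative assumptions, not HarMoEny's exact algorithm:

```python
def redirect_tokens(tokens, expert_of, home_gpu, num_gpus, slack=1.25):
    """Route each token to its expert's home GPU; redirect overflow
    tokens to the least-loaded GPU once the home GPU's load exceeds
    `slack` times the ideal (perfectly balanced) per-GPU load.
    Hypothetical policy for illustration only."""
    ideal = len(tokens) / num_gpus
    load = [0] * num_gpus          # tokens assigned to each GPU so far
    placement = []                 # chosen GPU for each token, in order
    for t in tokens:
        g = home_gpu[expert_of[t]]             # preferred GPU
        if load[g] >= slack * ideal:           # overloaded: redirect
            g = min(range(num_gpus), key=lambda i: load[i])
        load[g] += 1
        placement.append(g)
    return placement, load
```

For example, with 8 tokens all assigned to an expert hosted on GPU 0 and two GPUs, the redirect caps GPU 0 at 5 tokens and spills the remaining 3 to GPU 1, rather than letting GPU 0 take all 8 while GPU 1 idles.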

πŸ“ Abstract
Mixture-of-Experts (MoE) models offer computational efficiency during inference by activating only a subset of specialized experts for a given input. This enables efficient model scaling on multi-GPU systems that use expert parallelism without compromising performance. However, load imbalance among experts and GPUs introduces waiting times, which can significantly increase inference latency. To address this challenge, we propose HarMoEny, a novel system that tackles MoE load imbalance through two simple techniques: (i) dynamic token redistribution to underutilized GPUs and (ii) asynchronous prefetching of experts from the system to GPU memory. These techniques achieve a near-perfect load balance among experts and GPUs and mitigate delays caused by overloaded GPUs. We implement HarMoEny and compare its latency and throughput with four MoE baselines using real-world and synthetic datasets. Under heavy load imbalance, HarMoEny increases throughput by 37%–70% and reduces time-to-first-token by 34%–41%, compared to the next-best baseline. Moreover, our ablation study demonstrates that HarMoEny's scheduling policy reduces GPU idling time by up to 84% compared to the baseline policies.
Problem

Research questions and friction points this paper is trying to address.

Addressing load imbalance in MoE models during multi-GPU inference
Reducing inference latency caused by expert and GPU waiting times
Improving throughput and GPU utilization in expert-parallel systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token redistribution for load balance
Asynchronous prefetching of experts to GPUs
Scheduling policy reduces GPU idling time
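The second innovation, asynchronous expert prefetching, amounts to pipelining weight loading against computation: while layer i computes, a background thread fetches the experts needed for layer i+1. A minimal sketch, assuming hypothetical `load_expert` and `compute_layer` callables (not HarMoEny's actual API):

```python
import threading

def run_layers(num_layers, load_expert, compute_layer):
    """Overlap I/O-bound expert-weight loading with computation:
    while layer i computes, a background thread prefetches the
    experts for layer i+1. Illustrative sketch only."""
    prefetched = {}

    def prefetch(i):
        prefetched[i] = load_expert(i)

    prefetched[0] = load_expert(0)   # first load is unavoidable
    outputs = []
    for i in range(num_layers):
        t = None
        if i + 1 < num_layers:
            # Start fetching the next layer's experts in the background.
            t = threading.Thread(target=prefetch, args=(i + 1,))
            t.start()
        outputs.append(compute_layer(i, prefetched.pop(i)))
        if t:
            t.join()                 # next layer's weights are now resident
    return outputs
```

If loading and computing each take roughly the same time per layer, this overlap hides nearly all of the loading latency after the first layer, which is the source of the reported reduction in GPU idle time.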