BuddyMoE: Exploiting Expert Redundancy to Accelerate Memory-Constrained Mixture-of-Experts Inference

📅 2025-11-13
🤖 AI Summary
To address the GPU memory exhaustion and PCIe transfer latency caused by expert-prefetch failures in MoE inference, this paper proposes an acceleration method that exploits expert redundancy. The core innovation is the first dynamic functional-substitution mechanism for this setting: when the target expert misses the GPU cache, a semantically similar expert that is already resident is scheduled in its place, avoiding both the stall of an on-demand fetch and the accuracy loss of dropping the expert. The method comprises expert-similarity modeling, runtime redundant scheduling, CPU-GPU collaborative execution, lightweight prefetch prediction, and cache-aware expert placement. Experiments under memory-constrained settings show that the approach reduces end-to-end latency by up to 42%, improves throughput by up to 3.1×, and keeps accuracy loss below 0.3%, closely approaching the performance of full-expert loading.
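The substitution mechanism described above can be sketched as a small routing policy. This is a minimal illustration, not the paper's implementation; the function name `pick_expert` and the precomputed similarity ranking are hypothetical, standing in for the paper's expert-similarity modeling:

```python
# Hedged sketch of buddy substitution on an expert-cache miss.
from typing import Dict, List, Set


def pick_expert(target: int,
                cached: Set[int],
                similarity: Dict[int, List[int]]) -> int:
    """Return the target expert if it is resident on the GPU; otherwise
    fall back to its most similar cached 'buddy' instead of stalling on
    a PCIe fetch. If no buddy is resident, fetch on demand (slow path)."""
    if target in cached:
        return target
    for buddy in similarity.get(target, []):  # buddies ranked by similarity
        if buddy in cached:
            return buddy
    return target  # no buddy resident: on-demand fetch is unavoidable


# Toy similarity ranking (hypothetical): expert 2's closest buddies are 5, then 0.
sim = {2: [5, 0], 5: [2, 0], 0: [5, 2]}
cached = {0, 5}
print(pick_expert(2, cached, sim))  # expert 2 is not cached -> buddy 5 is used
```

In practice the paper's runtime would also weigh prefetch predictions and cache placement when deciding which experts stay resident; this sketch only shows the miss-time fallback.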

📝 Abstract
Mixture-of-Experts (MoE) architectures scale language models by activating only a subset of specialized expert networks for each input token, thereby reducing the number of floating-point operations. However, the growing size of modern MoE models causes their full parameter sets to exceed GPU memory capacity; for example, Mixtral-8x7B has 45 billion parameters and requires 87 GB of memory even though only 14 billion parameters are used per token. Existing systems alleviate this limitation by offloading inactive experts to CPU memory, but transferring experts across the PCIe interconnect incurs significant latency (about 10 ms). Prefetching heuristics aim to hide this latency by predicting which experts will be needed, but prefetch failures introduce long stalls that amplify inference latency. In the event of a prefetch failure, prior work offers two primary solutions: either fetch the expert on demand, which incurs a long stall due to the PCIe bottleneck, or drop the expert from the computation, which significantly degrades model accuracy. The critical challenge, therefore, is to maintain both high inference speed and model accuracy when prefetching fails.
Problem

Research questions and friction points this paper is trying to address.

Accelerating MoE inference under GPU memory constraints
Reducing latency from expert offloading across PCIe interconnect
Maintaining model accuracy when prefetching mechanisms fail
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits expert redundancy to accelerate inference
Maintains accuracy during prefetch failures
Reduces latency in memory-constrained MoE systems
Yun Wang
Shanghai Jiao Tong University, Shanghai, China
Lingyun Yang
Ph.D., Hong Kong University of Science and Technology
Machine Learning Systems · GPU Cluster Management
Senhao Yu
Shanghai Jiao Tong University, Shanghai, China
Yixiao Wang
Shanghai Jiao Tong University, Shanghai, China
Ruixing Li
Shanghai Jiao Tong University, Shanghai, China
Zhixiang Wei
Shanghai Jiao Tong University, Shanghai, China
James Yen
Shanghai Jiao Tong University, Shanghai, China
Zhengwei Qi
Professor of Computer Science, Shanghai Jiao Tong University
system software · program analysis · cloud computing