🤖 AI Summary
To address the GPU memory exhaustion and PCIe transfer latency caused by expert-prefetching failures in MoE model inference, this paper proposes an acceleration method that exploits expert redundancy. The core innovation is the first dynamic functional-substitution mechanism: when the target expert misses the GPU cache, a semantically similar expert that is already resident is scheduled in its place, so inference proceeds without stalling on a PCIe transfer and without dropping the expert's contribution. The method comprises expert similarity modeling, runtime redundant scheduling, CPU-GPU collaborative execution, lightweight prefetch prediction, and cache-aware expert placement. Under memory-constrained settings, experiments show that the approach reduces end-to-end latency by up to 42%, improves throughput by up to 3.1×, and keeps accuracy loss below 0.3%, closely approaching the performance of full-expert loading.
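The substitution idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the similarity matrix, cache contents, and `select_expert` helper are hypothetical placeholders standing in for the paper's expert similarity modeling and runtime scheduler.

```python
import numpy as np

NUM_EXPERTS = 8
rng = np.random.default_rng(0)

# Offline step (illustrative): pairwise expert-similarity scores, e.g. from
# comparing expert outputs on calibration data. Random here for demonstration.
sim = rng.random((NUM_EXPERTS, NUM_EXPERTS))
sim = (sim + sim.T) / 2          # make symmetric
np.fill_diagonal(sim, 1.0)       # each expert is maximally similar to itself

def select_expert(target: int, gpu_cache: set) -> int:
    """Return the target expert if it is GPU-resident; otherwise substitute
    the most similar expert that is already loaded, avoiding a PCIe stall."""
    if target in gpu_cache:
        return target  # cache hit: use the routed expert as-is
    # Cache miss: functional substitution instead of on-demand fetch or drop.
    return max(gpu_cache, key=lambda e: sim[target, e])

cache = {0, 2, 5}                 # experts currently resident on the GPU
print(select_expert(2, cache))    # hit: returns 2
print(select_expert(3, cache))    # miss: returns the best substitute in cache
```

A real scheduler would also weigh the router's gate scores and update the cache via the prefetch predictor; this sketch isolates only the hit/substitute decision.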
📝 Abstract
Mixture-of-Experts (MoE) architectures scale language models by activating only a subset of specialized expert networks for each input token, thereby reducing the number of floating-point operations per forward pass. However, modern MoE models have grown so large that their full parameter sets exceed GPU memory capacity: Mixtral-8x7B, for example, has 45 billion parameters and requires 87 GB of memory even though only 14 billion parameters are active per token. Existing systems work around this limit by offloading inactive experts to CPU memory, but transferring an expert across the PCIe interconnect incurs substantial latency (about 10 ms). Prefetching heuristics aim to hide this latency by predicting which experts will be needed, yet when a prediction misses, the resulting stall dominates inference latency. Prior work offers two responses to a prefetch failure: fetch the expert on demand, which stalls on the PCIe bottleneck, or drop the expert from the computation, which noticeably degrades model accuracy. The critical challenge, therefore, is to preserve both inference speed and model accuracy when prefetching fails.
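The ~10 ms PCIe figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes Mixtral-8x7B-like expert shapes (hidden size 4096, FFN size 14336, three weight matrices per expert, fp16 weights) and an assumed ~32 GB/s effective PCIe 4.0 x16 bandwidth; all of these numbers are illustrative assumptions, not measurements from the paper.

```python
# Assumed Mixtral-8x7B expert dimensions (gate, up, and down projections).
hidden, ffn = 4096, 14336
params_per_expert = 3 * hidden * ffn       # weights in one expert FFN
bytes_per_expert = params_per_expert * 2   # fp16: 2 bytes per parameter

# Assumed effective host-to-device bandwidth for PCIe 4.0 x16.
pcie_bw_bytes_per_s = 32e9

latency_ms = bytes_per_expert / pcie_bw_bytes_per_s * 1e3
print(f"~{bytes_per_expert / 1e6:.0f} MB per expert, "
      f"~{latency_ms:.1f} ms to transfer")
```

Under these assumptions an expert is roughly 350 MB and takes on the order of 10 ms to move over PCIe, consistent with the latency quoted above and far longer than a single layer's GPU compute time.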