🤖 AI Summary
MoE-based large language models face prohibitively high GPU memory overhead and cost because every expert must reside in VRAM. This paper proposes eMoE, a memory-efficient inference system that predicts and loads only the experts a prompt is likely to need, based on recurrent patterns in expert routing. Because reusing the same experts across consecutive prompts has minimal impact on perplexity, eMoE invokes its predictor only every few prompts rather than per prompt, and skips prediction entirely for tasks that are insensitive to routing accuracy. A task-aware scheduler further minimizes latency by accounting for Service Level Objectives (SLOs), task-specific output lengths, and expert-loading latency. Experiments demonstrate up to an 80% reduction in GPU memory usage and up to a 17% decrease in end-to-end latency; the system also supports 40× longer prompts and 4.5× larger batches, and achieves 1.5× higher throughput, all without accuracy degradation.
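The SLO-driven scheduling idea above can be sketched as a least-slack-first policy. This is an illustrative toy, not the paper's actual design: the `Request` fields, the linear latency model, and `MS_PER_TOKEN` are all assumptions introduced here.

```python
from dataclasses import dataclass

MS_PER_TOKEN = 10.0  # assumed per-token decode latency (illustrative)

@dataclass
class Request:
    prompt_id: str
    slo_ms: float           # latency objective for this request
    est_output_tokens: int  # task-specific predicted output length
    expert_load_ms: float   # latency to load any missing experts

def slack(req: Request) -> float:
    """Time to spare before the SLO would be violated, under a
    simple load-then-decode latency model."""
    est_latency = req.expert_load_ms + req.est_output_tokens * MS_PER_TOKEN
    return req.slo_ms - est_latency

def schedule(requests: list[Request]) -> list[Request]:
    """Serve the most urgent (least-slack) requests first."""
    return sorted(requests, key=slack)
```

A request with a tight SLO, a long predicted output, or cold experts to load gets little slack and is scheduled first; the real system would interleave this with batching and expert-loading decisions.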
📝 Abstract
In recent years, Mixture-of-Experts (MoE) has emerged as an effective approach for enhancing the capacity of deep neural networks (DNNs) at sub-linear computational cost. However, storing all experts on GPUs incurs significant memory overhead, increasing the monetary cost of MoE-based inference. To address this, we propose eMoE, a memory-efficient inference system for MoE-based large language models (LLMs), built on observations from our experimental measurements. eMoE reduces memory usage by predicting and loading only the required experts based on recurrent patterns in expert routing. Because we found that reusing the same experts for subsequent prompts has minimal impact on perplexity, eMoE invokes the expert predictor only every few prompts rather than for each prompt, reducing loading latency while maintaining accuracy. It also skips prediction for tasks that are less sensitive to routing accuracy. Finally, it uses task-aware scheduling to minimize inference latency, taking into account Service Level Objectives (SLOs), task-specific output lengths, and expert-loading latencies. Experimental results show that, compared to existing systems, eMoE reduces memory consumption by up to 80% while maintaining accuracy, and reduces inference latency by up to 17%. It also enables processing prompts 40× longer and batches 4.5× larger, and achieves 1.5× higher throughput.
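The predict-and-load mechanism described in the abstract can be sketched as follows. This is a minimal toy, assuming a frequency-based predictor over past routing decisions; the names (`predict_experts`, `ExpertCache`, `serve`) and the re-prediction interval `K` are hypothetical, not eMoE's actual API.

```python
from collections import Counter

K = 4  # assumed interval: invoke the predictor every K prompts, not per prompt

def predict_experts(routing_history, top_k=2):
    """Predict the experts most likely needed next from recurrent
    routing patterns (here: simple frequency counting over past steps)."""
    counts = Counter(e for step in routing_history for e in step)
    return {e for e, _ in counts.most_common(top_k)}

class ExpertCache:
    """Keeps only the predicted experts resident in (simulated) GPU memory."""
    def __init__(self):
        self.resident = set()

    def load(self, experts):
        self.resident = set(experts)  # evict the rest, load the new set

def serve(prompts, routing_history, routing_sensitive=True):
    """Yield each prompt with the expert set resident while it is served.
    Routing-insensitive tasks skip re-prediction entirely."""
    cache = ExpertCache()
    for i, prompt in enumerate(prompts):
        if routing_sensitive and i % K == 0:
            cache.load(predict_experts(routing_history))
        yield prompt, sorted(cache.resident)
```

The point of the sketch is the trigger structure: prediction cost is amortized over `K` prompts, and the memory saving comes from `ExpertCache` holding only the predicted subset rather than all experts.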