fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving

📅 2025-02-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Sparse activation in Mixture-of-Experts (MoE) large language models leads to memory inefficiency during inference, and existing offloading strategies fail to simultaneously achieve low latency and low memory footprint. Method: We propose a fine-grained expert offloading mechanism that introduces the first joint modeling framework integrating expert activation patterns with input prompt semantics, enabling dynamic prefetching, hierarchical caching, and coordinated scheduling across heterogeneous memory (GPU/CPU). Contribution/Results: Our approach breaks the latency–memory trade-off inherent in coarse-grained offloading. Evaluated on a six-GPU system, it reduces end-to-end inference latency by 47% and improves expert hit rate by 36%, significantly outperforming state-of-the-art methods.

๐Ÿ“ Abstract
Large Language Models (LLMs) have gained immense success in revolutionizing various applications, including content generation, search and recommendation, and AI-assisted operation. To reduce high training costs, the Mixture-of-Experts (MoE) architecture has become a popular backbone for modern LLMs. However, despite these benefits, serving MoE-based LLMs suffers from severe memory inefficiency due to sparsely activated experts. Recent studies propose offloading inactive experts from GPU memory to CPU memory to improve the serving efficiency of MoE models. However, they incur either high inference latency or high model memory footprints due to coarse-grained designs. To tame the latency–memory trade-off in MoE serving, we present fMoE, a fine-grained expert offloading system for MoE serving that achieves low inference latency with memory efficiency. We design fMoE to extract fine-grained expert selection patterns from MoE models and semantic hints from input prompts to efficiently guide expert prefetching, caching, and offloading decisions. fMoE is prototyped on top of HuggingFace Transformers and deployed on a six-GPU testbed. Experiments with open-source MoE models and real-world workloads show that fMoE reduces inference latency by 47% and improves expert hit rate by 36% over state-of-the-art solutions.
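The abstract describes guiding expert prefetching, caching, and offloading across GPU and CPU memory. As a rough illustration of that idea (not fMoE's actual implementation; the class, scoring scheme, and `top_k` parameter below are hypothetical), a minimal sketch of an LRU-style expert cache with score-driven prefetching might look like:

```python
from collections import OrderedDict

class ExpertCache:
    """Hypothetical sketch: experts live in a fixed-size "GPU" cache,
    are evicted to "CPU" memory on overflow, and can be prefetched
    ahead of time using predicted activation scores (which a system
    like fMoE would derive from expert-selection history and prompt
    semantics)."""

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity
        self.gpu = OrderedDict()   # expert_id -> weights, in LRU order
        self.cpu = {}              # offloaded experts
        self.hits = 0
        self.requests = 0

    def _evict(self):
        # Offload the least-recently-used expert to CPU memory.
        expert_id, weights = self.gpu.popitem(last=False)
        self.cpu[expert_id] = weights

    def prefetch(self, predicted_scores, top_k=2):
        # Move the top-k experts with the highest predicted activation
        # scores into GPU memory before they are requested.
        ranked = sorted(predicted_scores, key=predicted_scores.get,
                        reverse=True)
        for expert_id in ranked[:top_k]:
            if expert_id in self.cpu:
                if len(self.gpu) >= self.gpu_capacity:
                    self._evict()
                self.gpu[expert_id] = self.cpu.pop(expert_id)

    def fetch(self, expert_id):
        # Return expert weights; GPU-resident lookups count as hits.
        self.requests += 1
        if expert_id in self.gpu:
            self.hits += 1
            self.gpu.move_to_end(expert_id)
            return self.gpu[expert_id]
        # Miss: bring the expert in from CPU, evicting if needed.
        if len(self.gpu) >= self.gpu_capacity:
            self._evict()
        weights = self.cpu.pop(expert_id)
        self.gpu[expert_id] = weights
        return weights
```

With accurate predictions, prefetching raises the expert hit rate and hides CPU-to-GPU transfer latency behind computation; the paper's contribution is in making those predictions fine-grained (per expert, informed by prompt semantics) rather than in the cache mechanics themselves.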
Problem

Research questions and friction points this paper is trying to address.

Reduces memory inefficiency in MoE-based LLMs
Optimizes expert offloading for low inference latency
Improves expert hit rate with fine-grained strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained expert offloading system
Semantic hints for expert decisions
Significant inference latency reduction
Hanfei Yu
Stevens Institute of Technology
Serverless Computing · Large-Scale AI Systems · Distributed ML Systems · LLM Systems
Xingqi Cui
Rice University
Hong Zhang
University of Waterloo
Hao Wang
Rutgers University, Stevens Institute of Technology