AI Summary
Sparse activation in Mixture-of-Experts (MoE) large language models leads to memory inefficiency during inference, and existing offloading strategies fail to simultaneously achieve low latency and a low memory footprint. Method: We propose a fine-grained expert offloading mechanism that introduces the first joint modeling framework integrating expert activation patterns with input prompt semantics, enabling dynamic prefetching, hierarchical caching, and coordinated scheduling across heterogeneous memory (GPU/CPU). Contribution/Results: Our approach breaks the latency-memory trade-off inherent in coarse-grained offloading. Evaluated on a six-GPU system, it reduces end-to-end inference latency by 47% and improves expert hit rate by 36%, significantly outperforming state-of-the-art methods.
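The hierarchical-caching idea can be pictured, very roughly, as a two-tier expert cache that keeps hot experts resident on the GPU and offloads cold ones to CPU memory. The sketch below is illustrative only, assuming a PyTorch-style setup; the class name `ExpertCache`, its methods, and the LRU eviction policy are placeholders and not fMoE's actual design.

```python
# Minimal sketch (assumption): a hypothetical two-tier (GPU/CPU) expert cache.
# Names and the LRU policy are illustrative, not fMoE's API.
from collections import OrderedDict
import torch

class ExpertCache:
    """Keeps hot expert weights on the GPU and offloads cold ones to CPU memory."""

    def __init__(self, experts_cpu: dict, gpu_budget: int, device: str = "cuda"):
        # experts_cpu maps (layer, expert_id) -> expert module held on the CPU
        self.cpu_store = experts_cpu
        self.gpu_budget = gpu_budget      # max number of experts resident on GPU
        self.gpu_cache = OrderedDict()    # (layer, expert_id) -> module on GPU, LRU order
        self.device = device
        self.hits = 0
        self.misses = 0

    def fetch(self, layer: int, expert_id: int) -> torch.nn.Module:
        key = (layer, expert_id)
        if key in self.gpu_cache:         # cache hit: reuse the GPU-resident copy
            self.gpu_cache.move_to_end(key)
            self.hits += 1
            return self.gpu_cache[key]
        self.misses += 1
        if len(self.gpu_cache) >= self.gpu_budget:
            self._evict_one()             # make room by offloading a cold expert
        module = self.cpu_store[key].to(self.device, non_blocking=True)
        self.gpu_cache[key] = module
        return module

    def _evict_one(self) -> None:
        key, module = self.gpu_cache.popitem(last=False)  # least-recently-used expert
        self.cpu_store[key] = module.to("cpu")            # offload back to CPU memory
```

The hit/miss counters mirror the expert hit rate reported in the evaluation; the actual system replaces the LRU policy with decisions driven by activation patterns and prompt semantics.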
Abstract
Large Language Models (LLMs) have achieved immense success in revolutionizing various applications, including content generation, search and recommendation, and AI-assisted operation. To reduce high training costs, the Mixture-of-Experts (MoE) architecture has become a popular backbone for modern LLMs. However, despite the benefits, serving MoE-based LLMs suffers from severe memory inefficiency due to sparsely activated experts. Recent studies propose to offload inactive experts from GPU memory to CPU memory to improve the serving efficiency of MoE models. However, they incur either high inference latency or high model memory footprints due to their coarse-grained designs. To tame the latency-memory trade-off in MoE serving, we present fMoE, a fine-grained expert offloading system for MoE serving that achieves low inference latency with memory efficiency. We design fMoE to extract fine-grained expert selection patterns from MoE models and semantic hints from input prompts to efficiently guide expert prefetching, caching, and offloading decisions. fMoE is prototyped on top of HuggingFace Transformers and deployed on a six-GPU testbed. Experiments with open-source MoE models and real-world workloads show that fMoE reduces inference latency by 47% and improves expert hit rate by 36% over state-of-the-art solutions.
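To make the prefetching idea concrete, here is a hedged sketch of pattern-guided prefetching: experts predicted as likely to be routed to in the next layer are copied to the GPU ahead of time on a separate CUDA stream, overlapping transfers with current-layer compute. It reuses the `ExpertCache` sketch above; the probability tensor, `top_k` value, and `prefetch_next_layer` helper are illustrative assumptions, not fMoE's published prediction model.

```python
# Minimal sketch (assumption): asynchronously prefetch the experts most likely to be
# selected in the next layer, based on predicted activation probabilities (e.g., derived
# from historical routing traces and prompt semantics). Not fMoE's actual predictor.
import torch

def prefetch_next_layer(cache, next_layer, expert_probs, top_k=2, stream=None):
    """Copy the top-k most probable experts of `next_layer` to the GPU ahead of time.

    expert_probs: 1-D tensor with one predicted activation probability per expert.
    """
    stream = stream or torch.cuda.Stream()
    likely = torch.topk(expert_probs, k=top_k).indices.tolist()
    with torch.cuda.stream(stream):        # overlap H2D transfers with current-layer compute
        for expert_id in likely:
            cache.fetch(next_layer, expert_id)
```

If the prediction is accurate, the router finds the needed experts already GPU-resident, which is what drives the reported gains in expert hit rate and end-to-end latency.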