Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models

📅 2024-02-10
🏛️ arXiv.org
📈 Citations: 10
Influential: 1
🤖 AI Summary
To address the difficulty of running Mixture-of-Experts (MoE) large language models on GPUs with limited memory, this paper proposes Fiddler, a CPU-GPU orchestration system for resource-efficient MoE inference. Existing offloading approaches either incur heavy CPU-GPU data-migration overhead or ignore the distinct compute characteristics of the two processors; Fiddler instead makes expert-level decisions about where each computation should run, determining the execution strategy that best uses both resources. Unlike state-of-the-art systems tuned for a single scenario (e.g., single-batch inference or long prefill), Fiddler improves all evaluated inference modes: 1.26× speedup in single-batch inference, 1.30× in long-prefill processing, and 11.57× in beam search. The implementation is open-source.

📝 Abstract
Large Language Models (LLMs) with the Mixture-of-Experts (MoE) architectures have shown promising performance on various tasks. However, due to the huge model sizes, running them in resource-constrained environments where the GPU memory is not abundant is challenging. Some existing systems propose to use CPU resources to solve that, but they either suffer from the significant overhead of frequently moving data between CPU and GPU, or fail to consider distinct characteristics of CPUs and GPUs. This paper proposes Fiddler, a resource-efficient inference system for MoE models with limited GPU resources. Fiddler strategically utilizes CPU and GPU resources by determining the optimal execution strategy. Our evaluation shows that, unlike state-of-the-art systems that optimize for specific scenarios such as single batch inference or long prefill, Fiddler performs better in all scenarios. Compared against different baselines, Fiddler achieves 1.26 times speed up in single batch inference, 1.30 times in long prefill processing, and 11.57 times in beam search inference. The code of Fiddler is publicly available at https://github.com/efeslab/fiddler.
Problem

Research questions and friction points this paper is trying to address.

Optimizes CPU-GPU resource use for MoE model inference
Reduces overhead of data transfer between CPU and GPU
Improves performance in various inference scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

CPU-GPU orchestration for MoE models
Optimal execution strategy determination
Efficient inference under limited GPU memory
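The core decision behind Fiddler's orchestration is whether to run a CPU-resident expert's computation on the CPU or pay the one-time cost of copying its weights to the GPU. A minimal sketch of that trade-off is below; the function name, parameters, and timing figures are illustrative assumptions, not Fiddler's actual API or cost model.

```python
def choose_placement(num_tokens: int,
                     cpu_ms_per_token: float,
                     gpu_ms_per_token: float,
                     weight_copy_ms: float) -> str:
    """Decide where to execute one CPU-resident expert for a batch.

    Running on the CPU avoids moving weights but computes slowly;
    moving to the GPU pays a one-time weight-transfer cost, then
    computes fast. Pick whichever finishes sooner.
    """
    cpu_cost = num_tokens * cpu_ms_per_token
    gpu_cost = weight_copy_ms + num_tokens * gpu_ms_per_token
    return "cpu" if cpu_cost <= gpu_cost else "gpu"

# With few tokens per expert (decode), CPU execution wins;
# with many tokens (long prefill), transferring to GPU pays off.
print(choose_placement(1, 3.0, 0.05, 40.0))    # -> cpu
print(choose_placement(512, 3.0, 0.05, 40.0))  # -> gpu
```

This captures why a static offloading policy loses in some scenarios: the break-even point shifts with batch size and sequence length, so the placement decision must be made at runtime.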