Efficient MoE Inference with Fine-Grained Scheduling of Disaggregated Expert Parallelism

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the memory bottlenecks of MoE model inference, which arise from KV-cache overhead and sparse expert activation, this paper proposes FinDEP, a fine-grained task scheduling framework for disaggregated expert parallelism (DEP). Methodologically, it introduces: (1) a fine-grained computation/communication partitioning mechanism; (2) a scheduling optimization model supporting variable task granularity and ordering constraints; and (3) a scalable discrete-optimization solver for the resulting search space. Together, these enable coordinated, disaggregated execution of attention and expert modules across GPU clusters. Evaluated on DeepSeek-V2 and Qwen3-MoE, FinDEP achieves up to 1.61× higher throughput than baseline approaches, including up to a 1.24× speedup on a 32-GPU system, significantly outperforming existing DEP methods. The approach alleviates memory pressure while improving hardware utilization and end-to-end inference efficiency.

📝 Abstract
The mixture-of-experts (MoE) architecture scales model size with sublinear computational increase but suffers from memory-intensive inference due to KV caches and sparse expert activation. Recent disaggregated expert parallelism (DEP) distributes attention and experts to dedicated GPU groups but lacks support for shared experts and efficient task scheduling, limiting performance. We propose FinDEP, a fine-grained task scheduling algorithm for DEP that maximizes task overlap to improve MoE inference throughput. FinDEP introduces three innovations: 1) partitioning computation/communication into smaller tasks for fine-grained pipelining, 2) formulating a scheduling optimization supporting variable granularity and ordering, and 3) developing an efficient solver for this large search space. Experiments on four GPU systems with DeepSeek-V2 and Qwen3-MoE show FinDEP improves throughput by up to 1.61x over prior methods, achieving up to 1.24x speedup on a 32-GPU system.
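The fine-grained pipelining idea in the abstract — splitting computation and communication into smaller tasks so they overlap across chunks — can be illustrated with a toy two-stage pipeline model. This is a minimal sketch; the function name, per-token costs, and the two-stage structure are illustrative assumptions, not FinDEP's actual cost model.

```python
def pipeline_time(num_tokens, num_chunks, compute_per_token, comm_per_token):
    """Total time when a batch is split into chunks and the communication
    of chunk i overlaps with the computation of chunk i+1 (a classic
    two-stage pipeline model)."""
    chunk = num_tokens / num_chunks
    c = chunk * compute_per_token   # compute time per chunk
    t = chunk * comm_per_token      # communication time per chunk
    # The first chunk's compute cannot overlap; each subsequent step
    # advances by max(c, t); the last chunk's communication drains
    # the pipeline.
    return c + (num_chunks - 1) * max(c, t) + t

serial = pipeline_time(1024, 1, 1.0, 1.0)     # coarse-grained: no overlap
pipelined = pipeline_time(1024, 8, 1.0, 1.0)  # 8 fine-grained chunks
print(serial, pipelined)  # → 2048.0 1152.0
```

Under these toy costs, splitting into 8 chunks hides most of the communication behind computation, which is the effect FinDEP's partitioning mechanism targets at a much finer granularity.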
Problem

Research questions and friction points this paper is trying to address.

Reduces the cost of memory-intensive MoE inference by improving task scheduling efficiency
Addresses the lack of shared-expert support in disaggregated expert parallelism systems
Overcomes the performance limits of coarse-grained scheduling in distributed GPU environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained task scheduling for disaggregated expert parallelism
Partitioning computation and communication into smaller tasks
Optimizing scheduling with variable granularity and ordering
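The scheduling optimization over variable granularity can be sketched as a small discrete search: finer chunks improve overlap but add per-chunk overhead, so there is an optimal chunk count. This toy brute-force solver is an assumption-laden stand-in for FinDEP's scalable solver, which additionally handles ordering and sequential constraints across attention and expert GPU groups; all names and costs here are hypothetical.

```python
def makespan(num_tokens, k, compute_per_token, comm_per_token, chunk_overhead):
    """Two-stage pipeline makespan for k chunks, with a fixed per-chunk
    launch overhead added to both the compute and communication stages."""
    chunk = num_tokens / k
    c = chunk * compute_per_token + chunk_overhead
    t = chunk * comm_per_token + chunk_overhead
    return c + (k - 1) * max(c, t) + t

def best_granularity(num_tokens, compute_per_token, comm_per_token,
                     chunk_overhead, max_chunks=64):
    """Brute-force the chunk count minimizing the pipeline makespan."""
    return min(range(1, max_chunks + 1),
               key=lambda k: makespan(num_tokens, k, compute_per_token,
                                      comm_per_token, chunk_overhead))

# With nonzero per-chunk overhead, the optimum is interior: finer chunks
# overlap better, but overhead eventually dominates.
print(best_granularity(1024, 1.0, 1.0, 32.0))  # → 6
```

With zero overhead the search degenerates (ever-finer chunks always help), which is why a realistic cost model with per-task overheads is essential to any scheduling formulation like this.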