🤖 AI Summary
To address memory bottlenecks in MoE model inference, which arise from KV cache overhead and sparse expert activation, this paper proposes FinDEP, the first fine-grained task scheduling framework for disaggregated expert parallelism (DEP). Methodologically, it introduces: (1) a fine-grained computation/communication partitioning mechanism; (2) a scheduling optimization model supporting variable task granularity and ordering constraints; and (3) a scalable discrete optimization solver for the resulting search space. The framework enables coordinated, decoupled execution of attention and expert modules across GPU clusters. Evaluated on DeepSeek-V2 and Qwen3-MoE, it achieves up to 1.61x higher throughput than baseline approaches and delivers a 1.24x speedup on a 32-GPU system, outperforming existing DEP methods. The approach alleviates memory pressure while improving hardware utilization and end-to-end inference efficiency.
📝 Abstract
The mixture-of-experts (MoE) architecture scales model size with sublinear computational increase but suffers from memory-intensive inference due to KV caches and sparse expert activation. Recent disaggregated expert parallelism (DEP) distributes attention and experts to dedicated GPU groups but lacks support for shared experts and efficient task scheduling, limiting performance.
We propose FinDEP, a fine-grained task scheduling algorithm for DEP that maximizes task overlap to improve MoE inference throughput. FinDEP introduces three innovations: 1) partitioning computation and communication into smaller tasks to enable fine-grained pipelining, 2) formulating a scheduling optimization problem that supports variable task granularity and ordering constraints, and 3) developing an efficient solver for the resulting large search space.
Experiments on four GPU systems with DeepSeek-V2 and Qwen3-MoE show FinDEP improves throughput by up to 1.61x over prior methods, achieving up to 1.24x speedup on a 32-GPU system.
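The core idea behind the fine-grained pipelining in innovation (1) can be illustrated with a toy scheduler. The sketch below is hypothetical (task names, durations, and the greedy list-scheduling policy are all illustrative assumptions, not the paper's actual optimization model or solver): it splits one coarse all-to-all communication and one coarse expert computation into smaller chunks, so chunk i's expert compute overlaps chunk i+1's communication, shrinking the makespan.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    dur: float
    stream: str                    # "compute" or "comm"
    deps: tuple = field(default_factory=tuple)

def makespan(tasks):
    """Greedily list-schedule tasks onto two streams (compute, comm).
    Each stream runs one task at a time; a task starts once all its
    dependencies have finished and its stream is free."""
    done = {}                                  # task name -> finish time
    free = {"compute": 0.0, "comm": 0.0}       # per-stream earliest idle time
    pending = list(tasks)
    while pending:
        for i, t in enumerate(pending):
            if all(d in done for d in t.deps):
                start = max(free[t.stream],
                            max((done[d] for d in t.deps), default=0.0))
                done[t.name] = start + t.dur
                free[t.stream] = done[t.name]
                pending.pop(i)
                break
    return max(done.values())

# Coarse granularity: one big all-to-all (4 units) followed by one big
# expert FFN (4 units) -- communication and compute fully serialize.
coarse = [
    Task("a2a", 4.0, "comm"),
    Task("ffn", 4.0, "compute", deps=("a2a",)),
]

# Fine granularity: split into 4 chunks; chunk i's FFN compute overlaps
# chunk i+1's all-to-all communication.
fine = []
for i in range(4):
    fine.append(Task(f"a2a{i}", 1.0, "comm"))
    fine.append(Task(f"ffn{i}", 1.0, "compute", deps=(f"a2a{i}",)))

print(makespan(coarse))  # 8.0 -- no overlap
print(makespan(fine))    # 5.0 -- comm hidden behind compute
```

Finer granularity is not free in practice (per-task launch and kernel overheads grow with the number of chunks), which is why FinDEP treats granularity itself as a variable in its scheduling optimization rather than fixing it in advance.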