Multi-Layer Scheduling for MoE-Based LLM Reasoning

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses low resource utilization, head-of-line blocking, and load imbalance in inference for Mixture-of-Experts (MoE) large language models, where conventional scheduling struggles with the complexity of expert parallelism and dynamic routing. The authors propose the first three-tier cooperative scheduling framework spanning the request, engine, and expert layers, integrating priority aging, Shortest-Job-First (SJF) ordering, KV-cache- and prefix-load-aware engine scheduling, and expert-dependency-aware placement optimization. Experiments across more than one hundred test configurations demonstrate consistent improvements over vLLM, with up to a 17.8% reduction in time-to-first-token (TTFT) latency and up to a 13.3% reduction in time-per-output-token (TPOT) latency, along with higher throughput and better resource utilization.
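The request-level combination of SJF and priority aging described above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the class name, the `aging_rate` parameter, and the use of an estimated token count as the job-length proxy are all my own assumptions.

```python
import time


class AgingSJFQueue:
    """Illustrative Shortest-Job-First queue with priority aging.

    Effective score = estimated_tokens - aging_rate * seconds_waited,
    so a long-waiting large request eventually overtakes newly arrived
    short ones, avoiding the starvation that pure SJF can cause.
    (Hypothetical sketch; names and fields are not the paper's API.)
    """

    def __init__(self, aging_rate=10.0):
        self.aging_rate = aging_rate  # "token credit" earned per second of waiting
        self._pending = []            # list of (request_id, estimated_tokens, arrival)

    def submit(self, request_id, estimated_tokens, now=None):
        arrival = time.monotonic() if now is None else now
        self._pending.append((request_id, estimated_tokens, arrival))

    def pop_next(self, now=None):
        """Remove and return the request id with the lowest aged score."""
        now = time.monotonic() if now is None else now
        if not self._pending:
            return None
        best = min(self._pending,
                   key=lambda r: r[1] - self.aging_rate * (now - r[2]))
        self._pending.remove(best)
        return best[0]
```

With `aging_rate=10.0`, a 1000-token request that has waited 100 seconds scores the same as a fresh 0-token request, so aging bounds how long SJF can defer large jobs.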

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success across a wide range of tasks, but serving them efficiently at scale remains a critical challenge due to their substantial computational and latency demands. While most existing inference frameworks rely on simple scheduling strategies such as First-Come-First-Serve (FCFS) at the engine level and Round-Robin (RR) at the scheduler or coordinator level, they often fail to fully utilize system resources and may suffer from issues such as head-of-line blocking and load imbalance. Recent advances in Mixture-of-Experts (MoE) models have also introduced new challenges in scheduling arising from expert parallelism and routing complexity. This research proposes a multi-layer scheduling framework tailored for MoE-based LLM serving. It targets scheduling at three levels: request-level, engine-level, and expert-level. At the request level, we explore algorithms such as Shortest-Job-First (SJF) and priority-aware aging to improve throughput and reduce latency. At the engine level, we design load-aware dispatching strategies that account for the current prefix token load, KV cache utilization, and user stickiness to achieve better resource matching. At the expert level, we focus on alleviating expert hotspots and strategically placing inter-layer expert dependencies to balance load and improve routing efficiency. Extensive experimental results from more than 100 experiments conducted under diverse workload distributions show that our approach consistently outperforms the state-of-the-art inference framework vLLM, achieving up to 17.8% reduction in Time To First Token (TTFT) latency and 13.3% reduction in Time-Per-Output-Token (TPOT) latency.
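The engine-level dispatching described in the abstract scores candidate engines on prefix token load, KV cache utilization, and user stickiness. A minimal sketch of one such scoring rule follows; the field names (`kv_util`, `prefix_tokens`, `recent_users`), the weights, and the stickiness bonus are my own assumptions, not the paper's actual policy.

```python
def pick_engine(engines, user_id, w_kv=0.5, w_prefix=0.5, sticky_bonus=0.2):
    """Choose a serving engine by a weighted load score (illustrative sketch).

    Each engine is a dict with hypothetical fields:
      kv_util       - KV cache utilization in [0, 1]
      prefix_tokens - prefill tokens currently queued on the engine
      recent_users  - user ids recently served (proxy for warm prefix caches)
    Lower score wins; an engine that recently served this user gets a bonus,
    since its cache likely still holds that user's prefix.
    """
    max_prefix = max(e["prefix_tokens"] for e in engines) or 1

    def score(engine):
        s = (w_kv * engine["kv_util"]
             + w_prefix * engine["prefix_tokens"] / max_prefix)
        if user_id in engine["recent_users"]:
            s -= sticky_bonus  # user stickiness: prefer the cache-warm engine
        return s

    return min(engines, key=score)
```

The stickiness bonus trades a small amount of load balance for prefix-cache reuse: a returning user is routed to the same engine unless that engine has become markedly more loaded than the alternatives.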

Problem

Research questions and friction points this paper is trying to address.

MoE-based LLM
multi-layer scheduling
load imbalance
expert parallelism
inference latency

Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-layer scheduling
Mixture-of-Experts (MoE)
LLM inference
Load balancing
Expert routing