🤖 AI Summary
In DP+EP disaggregated large-model inference architectures, naive immediate scheduling induces intra-engine queuing and parallelization bubbles, degrading TTFT and limiting throughput. To address this, we propose a temporally decoupled Staggered Batch Scheduling (SBS) mechanism—the first to explicitly stagger Prefill and Decode phases along the time dimension. SBS integrates load-aware global resource allocation with DP/EP co-scheduling optimization, enabling production-grade deployment on H800 clusters. Evaluated on DeepSeek-V3 serving, SBS reduces TTFT by 30–40% and improves throughput by 15–20% over immediate scheduling baselines. Crucially, it systematically eliminates synchronization bottlenecks inherent in P/D-disaggregated architectures—without compromising throughput—a capability not previously achieved.
📝 Abstract
The evolution of Large Language Model (LLM) serving towards complex, distributed architectures--specifically the P/D-separated, large-scale DP+EP paradigm--introduces distinct scheduling challenges. Unlike traditional deployments where schedulers can treat instances as black boxes, DP+EP architectures exhibit high internal synchronization costs. We identify that immediate request dispatching in such systems leads to severe in-engine queuing and parallelization bubbles, degrading Time-to-First-Token (TTFT). To address this, we propose Staggered Batch Scheduling (SBS), a mechanism that deliberately buffers requests to form optimal execution batches. This temporal decoupling eliminates internal queuing bubbles without compromising throughput. Furthermore, leveraging the scheduling window created by buffering, we introduce a Load-Aware Global Allocation strategy that balances computational load across DP units for both Prefill and Decode phases. Deployed on a production H800 cluster serving Deepseek-V3, our system reduces TTFT by 30%-40% and improves throughput by 15%-20% compared to state-of-the-art immediate scheduling baselines.