🤖 AI Summary
Existing LLM serving systems prioritize throughput over SLO compliance, resulting in low attainment rates for TTFT (Time-To-First-Token) and TPOT (Time-Per-Output-Token). This paper proposes an SLO-oriented heterogeneous scheduling framework featuring a novel dual-guard mechanism: the TTFT Guard enforces deadline-aware request reordering and rejects infeasible requests, while the TPOT Guard integrates Virtual Batch Size (VBS)-based admission control with credit-driven dynamic batching. A lightweight, SLO-aware online prediction module orchestrates joint optimization of goodput and SLO attainment across the admission, queuing, and batching stages. Experiments demonstrate that, compared to state-of-the-art baselines, the system achieves up to a 14.4× improvement in goodput and up to a 46.5% increase in the joint TTFT/TPOT SLO compliance rate.
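The credit-driven dynamic batching idea can be illustrated with a minimal sketch. The exact mechanism is not specified here, so the following is a hypothetical interpretation: each decode step earns credit when its latency beats the TPOT budget and spends credit when it overruns, and the batch is allowed to grow only while the credit balance stays positive. The class name `CreditBatcher` and its fields are illustrative, not from the paper.

```python
class CreditBatcher:
    """Hypothetical credit-based dynamic batcher (illustrative sketch).

    Each observed decode step earns credit when it is faster than the
    TPOT budget and spends credit when it is slower; the batch grows
    only while the balance is positive and shrinks when it goes negative.
    """

    def __init__(self, tpot_slo: float, max_batch: int):
        self.tpot_slo = tpot_slo      # per-token latency budget (seconds)
        self.max_batch = max_batch    # hard cap on batch size
        self.credit = 0.0             # accumulated latency headroom
        self.batch_size = 1

    def observe_step(self, step_latency: float) -> int:
        # Earn credit when faster than the SLO, spend it when slower.
        self.credit += self.tpot_slo - step_latency
        if self.credit > 0 and self.batch_size < self.max_batch:
            self.batch_size += 1   # headroom available: admit one more request
        elif self.credit < 0 and self.batch_size > 1:
            self.batch_size -= 1   # overdrawn: shrink the batch to recover
        return self.batch_size
```

Under this sketch, a sustained run of fast decode steps lets the batch grow toward the cap, while a slow step immediately pulls the batch size back down before TPOT violations accumulate.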
📝 Abstract
Existing Large Language Model (LLM) serving systems prioritize maximum throughput. They often neglect Service Level Objectives (SLOs) such as Time to First Token (TTFT) and Time Per Output Token (TPOT), which leads to suboptimal SLO attainment. This paper introduces SCORPIO, an SLO-oriented LLM serving system designed to maximize system goodput and SLO attainment for workloads with heterogeneous SLOs. Our core insight is to exploit SLO heterogeneity for adaptive scheduling across admission control, queue management, and batch selection. SCORPIO features a TTFT Guard, which employs least-deadline-first reordering and rejects unattainable requests, and a TPOT Guard, which utilizes Virtual Batch Size (VBS)-based admission control and a novel credit-based batching mechanism. Both guards are supported by a predictive module. Evaluations demonstrate that SCORPIO improves system goodput by up to 14.4× and SLO adherence by up to 46.5% compared to state-of-the-art baselines.
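The TTFT Guard's least-deadline-first reordering with rejection of unattainable requests can be sketched as follows. This is a minimal illustration, not SCORPIO's implementation: it assumes a predictor supplies a per-request prefill-time estimate (the `predicted_prefill` field is hypothetical), and it models prefill as sequential for simplicity.

```python
class Request:
    """Illustrative request record; field names are assumptions, not the paper's."""

    def __init__(self, rid: str, arrival: float, ttft_slo: float,
                 predicted_prefill: float):
        self.rid = rid
        self.deadline = arrival + ttft_slo          # absolute TTFT deadline
        self.predicted_prefill = predicted_prefill  # predictor's prefill estimate


def ttft_guard(requests: list, now: float):
    """Least-deadline-first reordering; reject requests whose deadline
    would be missed even under the predicted schedule (sketch only)."""
    admitted, rejected = [], []
    t = now
    # Reorder the queue by earliest absolute deadline first.
    for r in sorted(requests, key=lambda r: r.deadline):
        if t + r.predicted_prefill <= r.deadline:
            admitted.append(r)
            t += r.predicted_prefill  # sequential-prefill assumption
        else:
            rejected.append(r)        # unattainable: reject early
    return admitted, rejected
```

Rejecting provably late requests up front frees capacity for requests that can still meet their deadlines, which is how deadline-aware rejection raises goodput rather than merely shedding load.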