🤖 AI Summary
Existing LLM schedulers struggle to simultaneously satisfy diverse SLO requirements—such as low latency for streaming chat, high throughput for tool invocation, and dynamic dependency handling for agent-based inference—leading to suboptimal service gain. This paper proposes a quantitative metric, *service gain*, and introduces Tempo, the first SLO-aware scheduler explicitly designed to maximize it. Methodologically, the approach combines conservative initial admission control via quantile-based response upper-bound estimation and dependency-graph matching, prioritizes requests by gain density, and dynamically reallocates resources based on online generation feedback. Evaluated across diverse real-world LLM workloads, Tempo achieves up to an 8.3× improvement in end-to-end service gain and up to a 10.3× increase in SLO-compliant throughput.
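To make the scheduling loop concrete, below is a minimal Python sketch of the gain-density idea: a conservative quantile upper bound on response length gives an estimate of remaining work, requests are ordered by gain per unit of estimated serving time, and the estimate is refined online as tokens are generated. All names here (`Request`, `quantile_upper_bound`, `gain_density`, the decode-rate parameter) are illustrative assumptions, not Tempo's actual interfaces.

```python
import heapq
from dataclasses import dataclass, field

import numpy as np


@dataclass(order=True)
class Request:
    # heapq pops the smallest item, so we sort on negative gain density
    # to serve the highest-density request first.
    neg_gain_density: float
    req_id: int = field(compare=False)
    gain: float = field(compare=False)          # service gain if the SLO is met
    est_total_tokens: float = field(compare=False)
    tokens_done: int = field(compare=False, default=0)


def quantile_upper_bound(observed_lengths: np.ndarray, q: float = 0.9) -> float:
    """Conservative initial estimate of response length: the q-quantile of
    output lengths observed for similar requests (a stand-in for Tempo's
    quantile-based upper-bound estimator)."""
    return float(np.quantile(observed_lengths, q))


def gain_density(gain: float, remaining_tokens: float, tokens_per_sec: float) -> float:
    """Service gain per unit of estimated remaining serving time."""
    return gain / (max(remaining_tokens, 1.0) / tokens_per_sec)


def schedule_step(queue: list[Request], tokens_per_sec: float = 50.0) -> Request | None:
    """Pop the highest-density request, 'decode' one step, then refine its
    priority from online progress and re-enqueue it if unfinished."""
    if not queue:
        return None
    req = heapq.heappop(queue)
    req.tokens_done += 1  # placeholder for one real batch decode iteration
    if req.tokens_done >= req.est_total_tokens:
        return req  # completed; its service gain is realized
    remaining = req.est_total_tokens - req.tokens_done
    req.neg_gain_density = -gain_density(req.gain, remaining, tokens_per_sec)
    heapq.heappush(queue, req)
    return None
```

In a real serving engine, `schedule_step` would correspond to one batch decode iteration, and the estimator would additionally condition on prompt features and the request's position in its dependency graph.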
📝 Abstract
The integration of Large Language Models (LLMs) into diverse applications, ranging from interactive chatbots and cloud AIOps to intelligent agents, has introduced a wide spectrum of Service Level Objectives (SLOs) for responsiveness. These workloads include latency-sensitive requests focused on per-token latency in streaming chat, throughput-intensive requests that require rapid full responses to invoke tools, and collective requests with dynamic dependencies arising from self-reflection or agent-based reasoning. This workload diversity, amplified by unpredictable request information such as response lengths and runtime dependencies, makes existing schedulers inadequate even within their design envelopes. In this paper, we define service gain as the useful service delivered by completing requests. We observe that because an SLO directly reflects a request's actual performance needs, completing the request much faster than its SLO (e.g., its deadline) yields limited additional service gain. Based on this insight, we introduce Tempo, the first systematic SLO-aware scheduler designed to maximize service gain across diverse LLM workloads. Tempo allocates just enough serving bandwidth to meet each SLO, maximizing residual capacity for other best-effort workloads. Rather than assuming request information is fully known or entirely unknown, it adopts a hybrid scheduling strategy: using quantile-based response upper bounds and dependency-graph matching for conservative initial estimates, prioritizing requests by service gain density, and refining decisions online as generation progresses. Our evaluation across diverse workloads, including chat, reasoning, and agentic pipelines, shows that Tempo improves end-to-end service gain by up to 8.3× and achieves up to 10.3× higher SLO goodput compared to state-of-the-art designs.
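The core insight (that finishing far ahead of an SLO adds little useful service) can be expressed as a gain function that saturates at the deadline. The sketch below shows one plausible shape under assumed parameters (`max_gain`, a linear post-deadline decay over a `tolerance` window); it is an illustration of the idea, not the paper's exact definition.

```python
def service_gain(completion_time: float, deadline: float,
                 max_gain: float = 1.0, tolerance: float = 0.2) -> float:
    """Gain saturates at the SLO deadline: finishing earlier than the
    deadline yields no additional gain, while finishing late decays
    linearly to zero over a small tolerance window.

    The post-deadline decay shape and parameters are illustrative
    assumptions, not Tempo's published definition.
    """
    if completion_time <= deadline:
        return max_gain  # meeting the SLO earns the full gain
    overshoot = (completion_time - deadline) / (tolerance * deadline)
    return max(0.0, max_gain * (1.0 - overshoot))  # graceful degradation past SLO


print(service_gain(0.5, deadline=1.0))  # 1.0: finishing 2x early adds nothing
print(service_gain(1.0, deadline=1.0))  # 1.0: exactly on time, full gain
print(service_gain(1.1, deadline=1.0))  # 0.5: slightly late, partial gain
```

Under a gain function of this shape, a scheduler gains nothing by over-serving fast requests, which is why diverting that residual bandwidth to best-effort workloads increases total service gain.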