Ascendra: Dynamic Request Prioritization for Efficient LLM Serving

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly guaranteeing two SLOs, Time-To-First-Token (TTFT) and Time-Between-Tokens (TBT), in large language model (LLM) serving, this paper proposes a dynamic request-priority scheduling framework. The approach integrates: (1) a time-sensitivity-aware dynamic priority migration mechanism; (2) a dual-instance GPU resource partitioning architecture that segregates low- and high-priority inference instances; and (3) an SLO-risk-prediction-driven proactive offloading and request-rerouting strategy. Coupled with analytical performance modeling and low-latency kernel optimizations, the system achieves up to 1.7x higher throughput under stringent dual-SLO constraints while meeting both TTFT and TBT targets, outperforming the state-of-the-art systems vLLM and Sarathi-Serve.
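The priority-migration idea can be sketched in a few lines: a request's remaining slack to its TTFT deadline determines which tier it belongs to, and that tier changes as time passes. This is an illustrative Python sketch only, not Ascendra's implementation; `Req`, `URGENT_SLACK`, and the earliest-deadline-first selection rule are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Req:
    rid: int
    deadline: float  # absolute time by which the first token is due

# Assumed threshold (seconds): below this much slack, a request is "urgent".
URGENT_SLACK = 0.1

def priority(req: Req, now: float) -> str:
    # Priority "migrates" upward as the deadline nears: a request starts
    # low-priority and becomes high-priority once its remaining slack
    # drops below URGENT_SLACK.
    return "high" if req.deadline - now < URGENT_SLACK else "low"

def pick_next(queue: list, now: float) -> Req:
    # Among waiting requests, serve the one with the least slack first
    # (earliest-deadline-first); urgency is re-evaluated at every step,
    # so the same request ranks higher the longer it waits.
    req = min(queue, key=lambda r: r.deadline - now)
    queue.remove(req)
    return req
```

Because `pick_next` rescores the queue at each call with the current time, no static priority is ever stale, which is the essence of the time-sensitivity-aware migration described above.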

📝 Abstract
The rapid advancement of Large Language Models (LLMs) has driven the need for more efficient serving strategies. In this context, efficiency refers to the proportion of requests that meet their Service Level Objectives (SLOs), particularly for Time To First Token (TTFT) and Time Between Tokens (TBT). However, existing systems often prioritize one metric at the cost of the other. We present Ascendra, an LLM serving system designed to meet both TTFT and TBT SLOs simultaneously. The core insight behind Ascendra is that a request's urgency evolves as it approaches its deadline. To leverage this, Ascendra partitions GPU resources into two types of instances: low-priority and high-priority. Low-priority instances maximize throughput by processing requests out of arrival order, but at the risk of request starvation. To address this, Ascendra employs a performance model to predict requests at risk of missing their SLOs and proactively offloads them to high-priority instances. High-priority instances are optimized for low-latency execution and handle urgent requests nearing their deadlines. This partitioned architecture enables Ascendra to effectively balance high throughput and low latency. Extensive evaluation shows that Ascendra improves system throughput by up to 1.7x compared to vLLM and Sarathi-Serve while meeting both TTFT and TBT SLOs.
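The performance-model-driven offload described in the abstract can be sketched roughly as follows. This sketch assumes a deliberately simple linear model, where each queued request ahead of a given one costs one batch iteration of `AVG_BATCH_TIME`; the constant, the `Req` type, and the routing function are illustrative assumptions, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Req:
    rid: int
    arrival: float
    deadline: float  # arrival + TTFT SLO

# Assumed cost of one scheduling iteration on a low-priority instance.
AVG_BATCH_TIME = 0.05  # seconds

def predict_ttft(position_in_queue: int) -> float:
    # Toy performance model: each request ahead in the queue delays the
    # first token by one batch iteration.
    return (position_in_queue + 1) * AVG_BATCH_TIME

def route(waiting: list, now: float):
    # Requests whose predicted TTFT exceeds their remaining budget are
    # proactively offloaded to the high-priority (low-latency) instances;
    # the rest stay on throughput-oriented low-priority instances.
    low, high = [], []
    for pos, req in enumerate(sorted(waiting, key=lambda r: r.arrival)):
        if predict_ttft(pos) > req.deadline - now:
            high.append(req)
        else:
            low.append(req)
    return low, high
```

In this toy version the prediction ignores prefill length and batch composition, which the paper's analytical model would account for; the point is only the control flow: predict, compare against the SLO budget, and reroute before the deadline is missed rather than after.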
Problem

Research questions and friction points this paper is trying to address.

Dynamic prioritization of LLM requests to meet TTFT and TBT SLOs
Balancing throughput and latency via partitioned GPU resource allocation
Preventing request starvation by predicting and offloading urgent requests
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic prioritization based on evolving request urgency
Partitioned GPU resources for low- and high-priority instances
Proactive offloading of at-risk requests to meet SLOs efficiently