SLICE: SLO-Driven Scheduling for LLM Inference on Edge Computing Devices

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM inference schedulers for edge devices optimize solely for throughput, failing to meet diverse SLOs—including time-to-first-token (TTFT), time-per-output-token (TPOT), and end-to-end latency—resulting in high SLO violation rates and hindering deployment of real-time applications (e.g., navigation, control). To address this, we propose SLICE, the first fine-grained, SLO-driven LLM inference scheduling framework tailored for edge environments. SLICE jointly optimizes request scheduling and dynamic generation rate control to enable real-time, adaptive scheduling under multi-dimensional SLO constraints. Its core innovation lies in the tight integration of utility-maximizing scheduling with an iterative, feedback-driven rate control mechanism. Extensive experiments demonstrate that SLICE achieves up to 35× higher SLO compliance rates and reduces task completion time by up to 3.4× compared to the state-of-the-art baselines Orca and FastServe.

📝 Abstract
Large Language Models (LLMs), as the foundational architecture for next-generation interactive AI applications, not only power intelligent dialogue systems but also drive the evolution of embodied intelligence on edge devices such as humanoid robots and smart vehicles. The applications running on these edge devices impose differentiated Service Level Objective (SLO) requirements on LLM services, manifested as distinct constraints on Time to First Token (TTFT), Time Per Output Token (TPOT), and end-to-end latency. Notably, edge devices typically handle real-time tasks that are extremely sensitive to latency, such as machine control and navigation planning. However, existing scheduling systems still treat maximizing output token throughput as their sole optimization objective, failing to adequately address the diversity of SLO requirements. This results in persistently high violation rates for end-to-end latency and TPOT-related SLOs. This paper proposes SLICE, an innovative scheduling solution designed for edge computing scenarios with differentiated SLO requirements. By combining a utility-maximizing request scheduling algorithm with a dynamic iterative control mechanism for generation rates, SLICE significantly improves the SLO attainment of LLM inference services. Experimental results demonstrate that, compared to the state-of-the-art solutions Orca and FastServe, SLICE achieves up to 35x higher SLO attainment and up to 3.4x lower task completion time.
Problem

Research questions and friction points this paper is trying to address.

Addressing diverse SLO requirements in edge LLM inference
Reducing high violation rates for latency and TPOT SLOs
Optimizing scheduling beyond maximizing token throughput
Innovation

Methods, ideas, or system contributions that make the work stand out.

SLO-driven scheduling for edge LLM inference
Utility-maximizing request scheduling algorithm
Dynamic iterative control for generation rates
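The two mechanisms above can be sketched in miniature. The paper's actual utility function and rate controller are not reproduced on this page, so everything below is a hypothetical stand-in: `utility` uses least-TTFT-slack urgency and `adjust_rate` uses simple proportional feedback, chosen only to illustrate the idea of utility-maximizing request selection combined with iterative generation-rate control, not SLICE's implementation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    arrival: float      # arrival time (s)
    ttft_slo: float     # time-to-first-token budget (s)
    tpot_slo: float     # time-per-output-token budget (s)
    tokens_left: int    # output tokens still to generate

def utility(req: Request, now: float) -> float:
    # Hypothetical utility: urgency grows as the request approaches its
    # TTFT deadline; a real utility would also weigh TPOT and e2e slack.
    slack = (req.arrival + req.ttft_slo) - now
    return -slack

def pick_next(pending: list[Request], now: float) -> Request:
    # Utility-maximizing step: serve the request with the highest utility.
    return max(pending, key=lambda r: utility(r, now))

def adjust_rate(rate: float, measured_tpot: float, tpot_slo: float,
                gain: float = 0.5) -> float:
    # Iterative feedback step: raise the generation rate (tokens/s)
    # proportionally when measured TPOT overshoots its SLO budget.
    error = measured_tpot - tpot_slo
    return max(1.0, rate + gain * error * rate / tpot_slo)
```

In this toy form, the scheduler repeatedly calls `pick_next` to decide which request gets the next decoding slot, while `adjust_rate` closes the loop between observed TPOT and the configured SLO each iteration.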
Pan Zhou
Southwest Minzu University
Yiming Lei
Southwest Minzu University
Ling Liu
Southwest Minzu University
Xiaoqiong Xu
Southwest Minzu University
Ying Cai
Associate Professor, Department of Computer Science, Iowa State University
data privacy and confidentiality; query authentication and correction; mobile object management; multimedia communications
Daji Ergu
Southwest Minzu University
Hongfang Yu
UESTC
Network Virtualization; Edge/Cloud Computing; Machine Learning Systems
Yueyue Dai
Huazhong University of Science and Technology