🤖 AI Summary
Existing LLM inference schedulers for edge devices optimize solely for throughput and fail to meet diverse SLOs, including time-to-first-token (TTFT), time-per-output-token (TPOT), and end-to-end latency. The resulting high SLO violation rates hinder the deployment of real-time applications (e.g., navigation, machine control). To address this, we propose SLICE, the first fine-grained, SLO-driven LLM inference scheduling framework tailored for edge environments. SLICE jointly optimizes request scheduling and dynamic generation rate control to enable real-time, adaptive scheduling under multi-dimensional SLO constraints. Its core innovation is the tight integration of utility-maximizing scheduling with an iterative, feedback-driven rate control mechanism. Extensive experiments show that SLICE achieves up to 35× higher SLO attainment and up to 3.4× shorter task completion time than the state-of-the-art baselines Orca and FastServe.
📝 Abstract
Large Language Models (LLMs), as the foundational architecture for next-generation interactive AI applications, not only power intelligent dialogue systems but also drive the evolution of embodied intelligence on edge devices such as humanoid robots and smart vehicles. The applications running on these devices impose differentiated Service Level Objective (SLO) requirements on LLM services, manifested as distinct constraints on Time to First Token (TTFT), Time Per Output Token (TPOT), and end-to-end latency. Notably, edge devices typically handle real-time tasks that are extremely latency-sensitive, such as machine control and navigation planning. However, existing serving systems still treat maximizing output token throughput as their sole optimization objective and fail to accommodate this diversity of SLO requirements, resulting in persistently high violation rates for end-to-end latency and TPOT SLOs.
This paper proposes SLICE, an innovative scheduling solution designed for edge computing scenarios with differentiated SLO requirements. By combining a utility-maximizing request scheduling algorithm with a dynamic iterative control mechanism for generation rates, SLICE significantly improves the SLO attainment of LLM inference serving. Experimental results show that, compared with the state-of-the-art solutions Orca and FastServe, SLICE achieves up to 35× higher SLO attainment and up to 3.4× shorter task completion times.
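To make the two components concrete, here is a minimal sketch of how utility-maximizing request scheduling and iterative feedback control of the generation rate might interact. Everything below is an assumption for illustration: the utility function, the proportional feedback rule, and all names (`Request`, `pick_next`, `adjust_rate`) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch inspired by the abstract's description of SLICE;
# the utility function and control law below are illustrative assumptions,
# not the paper's actual algorithm.

@dataclass
class Request:
    rid: int
    ttft_slo: float      # seconds until the first token must appear
    tpot_slo: float      # max seconds per output token
    e2e_slo: float       # end-to-end latency budget (seconds)
    waited: float = 0.0  # time this request has spent queued so far

def utility(req: Request) -> float:
    # Toy utility: requests closer to violating their TTFT SLO are more urgent.
    slack = req.ttft_slo - req.waited
    return 1.0 / max(slack, 1e-3)

def pick_next(queue: list[Request]) -> Request:
    # Utility-maximizing scheduling: serve the highest-utility request first.
    return max(queue, key=utility)

def adjust_rate(rate: float, observed_tpot: float, target_tpot: float,
                gain: float = 0.5) -> float:
    # Iterative feedback control: if tokens are coming out slower than the
    # TPOT target, raise the generation rate (tokens/s), and vice versa.
    error = observed_tpot - target_tpot
    return max(rate * (1.0 + gain * error / target_tpot), 0.1)

# Usage: request 1 is nearly out of TTFT slack, so it is scheduled first.
q = [Request(1, ttft_slo=0.5, tpot_slo=0.05, e2e_slo=2.0, waited=0.4),
     Request(2, ttft_slo=2.0, tpot_slo=0.10, e2e_slo=5.0, waited=0.1)]
print(pick_next(q).rid)  # → 1
print(adjust_rate(20.0, observed_tpot=0.06, target_tpot=0.05))  # → 22.0
```

The feedback step can run once per decoding iteration, which matches the abstract's "dynamic iterative control" framing: each observed TPOT sample nudges the rate toward the SLO target rather than fixing it once at admission.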