🤖 AI Summary
This work addresses the inefficiency of traditional large language model (LLM) inference scheduling, which relies on deterministic point estimates of output length and often suffers from head-of-line blocking because it neglects the inherent stochasticity of text generation. To make scheduling more robust, the study models output lengths with a heavy-tailed distribution—specifically, the log-t distribution—and proposes a new metric, Tail Inflated Expectation (TIE), which integrates both expected latency and tail risk into shortest-job-first (SJF) scheduling. Empirical evaluations show significant improvements: online inference achieves a 2.31× reduction in per-token latency, and offline data generation throughput increases by 1.42×.
📝 Abstract
To schedule LLM inference, the \textit{shortest job first} (SJF) principle is favorable because it prioritizes requests with short output lengths to avoid head-of-line (HOL) blocking. Existing methods usually predict a single output length for each request to facilitate scheduling. We argue that such a \textit{point estimate} does not match the \textit{stochastic} decoding process of LLM inference, where output length is \textit{uncertain} by nature and determined by when the end-of-sequence (EOS) token is sampled. Hence, the output length of each request should be fitted with a distribution rather than a single value. Through an in-depth analysis of empirical data and the stochastic decoding process, we observe that output length follows a heavy-tailed distribution and can be fitted with the log-t distribution. On this basis, we propose a simple metric called Tail Inflated Expectation (TIE) to replace the output length in SJF scheduling, which adjusts the expectation of a log-t distribution with its tail probabilities to account for the risk that a request generates long outputs. To evaluate our TIE scheduler, we compare it with three strong baselines, and the results show that TIE reduces the per-token latency by $2.31\times$ for online inference and improves throughput by $1.42\times$ for offline data generation.
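To make the idea concrete, here is a minimal sketch of how a TIE-style score might be computed and plugged into SJF ordering. The paper's exact formula is not reproduced here; this sketch assumes a Monte Carlo estimate of the log-t distribution's truncated expectation, inflated by a penalty for mass beyond a tail quantile. The names `tie_score`, `tail_q`, and `lam` are illustrative, not from the paper. Note that the raw mean of a log-t distribution diverges (the Student-t has no finite exponential moments), which is one reason a truncated, tail-adjusted expectation is a natural substitute.

```python
import numpy as np

def tie_score(mu: float, sigma: float, df: float,
              tail_q: float = 0.95, lam: float = 1.0,
              n_samples: int = 100_000, seed: int = 0) -> float:
    """Hypothetical TIE-style score for a request whose output length is
    modeled as log-t: exp(mu + sigma * T), T ~ Student-t with `df` d.o.f.

    Returns a truncated expectation inflated by tail risk; smaller scores
    are scheduled first (SJF). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    # Sample output lengths from the fitted log-t distribution.
    samples = np.exp(mu + sigma * rng.standard_t(df, size=n_samples))
    # Truncate at the tail quantile: the untruncated log-t mean diverges.
    cutoff = np.quantile(samples, tail_q)
    truncated_mean = samples[samples <= cutoff].mean()
    # Inflate by the tail: probability mass beyond the cutoff, weighted by
    # how long those tail outputs would be (approximated by the cutoff).
    tail_prob = 1.0 - tail_q
    return truncated_mean + lam * tail_prob * cutoff

# SJF scheduling with TIE: order requests by ascending score.
# Each request carries its fitted log-t parameters (mu, sigma, df).
requests = [("req_a", 4.0, 0.8, 3.0), ("req_b", 3.0, 0.5, 10.0)]
order = sorted(requests, key=lambda r: tie_score(r[1], r[2], r[3]))
```

A heavier tail (smaller `df` or larger `sigma`) raises the score even when the typical output length is similar, so requests that risk generating very long outputs are pushed later in the queue, which is exactly the HOL-blocking risk TIE is meant to capture.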