🤖 AI Summary
This work addresses the challenge of jointly optimizing KV cache hit efficiency and load balancing in large language model inference, a task that existing schedulers tackle with complex policies and extensive hyperparameter tuning. The authors propose a minimalist scheduling score that directly multiplies a KV cache-aware indicator (the number of new prefill tokens a request would incur on an instance) by a load-balancing indicator (the instance's current batch size), naturally integrating both objectives without any hyperparameters. This reveals, for the first time, that a multiplicative combination can eliminate hyperparameter tuning while remaining algorithmically simple and outperforming state-of-the-art schedulers. Across realistic chat, API, and code-agent workloads, the method reduces time-to-first-token (TTFT) by 92% and 52%, and time-per-output-token (TPOT) by 21% and 20%, relative to vLLM-v1 and a production-grade scheduler, respectively.
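
As a concrete illustration, here is a minimal Python sketch of the scoring rule described above. This is not the authors' implementation: the `Instance` fields and the prefix-matching helper are hypothetical stand-ins for state a real scheduler would query from the serving engine (e.g., vLLM's prefix cache and batch statistics).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    """Hypothetical per-instance state a scheduler would track."""
    batch_size: int  # requests currently in the running batch
    cached_prefixes: List[List[int]] = field(default_factory=list)  # token-id prefixes held in KV cache

def matched_prefix_len(inst: Instance, prompt: List[int]) -> int:
    """Longest cached prefix (in tokens) that the prompt can reuse."""
    best = 0
    for prefix in inst.cached_prefixes:
        n = 0
        while n < min(len(prefix), len(prompt)) and prefix[n] == prompt[n]:
            n += 1
        best = max(best, n)
    return best

def pick_instance(instances: List[Instance], prompt: List[int]) -> Instance:
    """Route to the instance minimizing the multiplicative score:
    (new prefill tokens) * (current batch size). No weights to tune.
    Note: zero scores (empty batch or full prefix hit) tie; a real
    scheduler would need a tie-breaking rule for such cases."""
    def score(inst: Instance) -> int:
        new_prefill = len(prompt) - matched_prefix_len(inst, prompt)
        return new_prefill * inst.batch_size
    return min(instances, key=score)

# Example: instance b has a longer matching prefix but a larger batch.
a = Instance(batch_size=2, cached_prefixes=[[1, 2]])
b = Instance(batch_size=6, cached_prefixes=[[1, 2, 3, 4, 5]])
print(pick_instance([a, b], prompt=[1, 2, 3, 4, 5, 6]) is b)  # score(a)=8, score(b)=6 -> True
```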
📝 Abstract
High-quality LLM request scheduling requires achieving two objectives: routing each request to an instance whose KV cache (KV$) can accelerate its execution, and keeping the workload balanced across instances. Achieving both is challenging because pursuing one may compromise the other. Current approaches compute a scheduling score by combining indicators for the two objectives with various combinators (e.g., linear combinations); these are complex in that they either require significant workload-specific hyperparameter tuning or the development of a model- and hardware-aware simulator, and they can still yield suboptimal performance. In this paper, we show that a simple multiplication of two carefully chosen indicators, one KV$-aware (the number of new prefill tokens if the request is routed to an instance) and one load-balancing-aware (the instance's current batch size), can serve as a scheduling score that achieves both objectives well without any hyperparameter tuning. The key idea is that the multiplied score weighs both objectives much as a linear combination does, with the useful property that the hyperparameters cancel out during comparison, so no tuning is needed to find the best weights. The two indicators are chosen based on our analysis of LLM serving characteristics, and our extensive experiments show that this simple approach reduces TTFT by 92% and 52%, and TPOT by 21% and 20%, compared to vLLM-v1 and a production scheduler on real-world workloads covering chatbots, API calls, and coding agents. We also mathematically derive the conditions under which multiplication may fail, and find that such conditions are extremely rare in practice and can be detected (and mitigated) beforehand.
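
One way to read the cancellation claim (our illustration, not the paper's formal derivation): in log space, the multiplicative score is a linear combination of the two indicators with a shared weight, and any common positive weight $\alpha$ drops out when two instances are compared:

$$
\alpha \log T_i + \alpha \log B_i \;<\; \alpha \log T_j + \alpha \log B_j
\iff T_i B_i < T_j B_j \qquad (\alpha > 0),
$$

where $T_i$ denotes the new prefill tokens and $B_i$ the current batch size of instance $i$; the instance ranking is therefore independent of $\alpha$, so there is no weight to tune.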