π€ AI Summary
To address the GPU memory bottleneck induced by KV caching in LLM inference and the scheduling challenges under dynamic request arrivals and memory constraints, this paper models request scheduling as a multi-stage online optimization problem. We introduce a novel fluid approximation to establish a theoretical performance benchmark and propose WAIT and Nested WAITβfirst algorithms achieving joint optimality across time-to-first-token (TTFT), end-to-end latency, and throughput. Our approach integrates KV-cache-aware real-time resource allocation, multi-threshold online scheduling, and rigorous theoretical analysis, proving asymptotic convergence to the fluid-optimal solution under heavy load. Evaluations on Llama-7B deployed on A100 GPUs demonstrate a 23% throughput increase and a 31% reduction in TTFT, significantly outperforming state-of-the-art systems vLLM and Sarathi.
π Abstract
Large Language Models (LLMs) are indispensable in today's applications, but their inference procedure -- generating responses by processing text in segments and using a memory-heavy Key-Value (KV) cache -- demands significant computational resources, particularly under memory constraints. This paper formulates LLM inference optimization as a multi-stage online scheduling problem where sequential prompt arrivals and KV cache growth render conventional scheduling ineffective. We develop a fluid dynamics approximation to provide a tractable benchmark that guides algorithm design. Building on this, we propose the Waiting for Accumulated Inference Threshold (WAIT) algorithm, which uses multiple thresholds to schedule incoming prompts optimally when output lengths are known, and extend it to Nested WAIT for cases with unknown output lengths. Theoretical analysis shows that both algorithms achieve near-optimal performance against the fluid benchmark in heavy traffic conditions, balancing throughput, latency, and Time to First Token (TTFT). Experiments with the Llama-7B model on an A100 GPU using both synthetic and real-world datasets demonstrate improved throughput and reduced latency relative to established baselines like vLLM and Sarathi. This work bridges operations research and machine learning, offering a rigorous framework for the efficient deployment of LLMs under memory constraints.