L4: Low-Latency and Load-Balanced LLM Serving via Length-Aware Scheduling

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low GPU utilization and high latency in serving large language models (LLMs) with ultra-long contexts (>128K tokens), caused by request-length heterogeneity, this paper proposes a dynamic, cross-instance length-aware scheduling framework. Methodologically, it introduces: (1) length-specialized instance grouping coupled with a cascaded pipeline architecture; (2) quality-of-experience (QoE)-optimal stage partitioning driven by dynamic programming; and (3) runtime adaptive refinement of length intervals, alongside decentralized intra- and inter-group load rebalancing. Evaluated against state-of-the-art multi-instance schedulers, the framework achieves up to 67% reduction in end-to-end latency, up to 69% reduction in tail latency, and a 2.89× throughput improvement. These gains significantly alleviate the attention backend’s sensitivity to input-length heterogeneity—a key bottleneck in ultra-long-context LLM serving.
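The QoE-optimal stage partitioning described above can be illustrated with a classic linear-partition dynamic program. The cost model below (minimize the maximum per-group token load over contiguous length intervals) is a simplified stand-in: the paper's actual QoE objective and constraints are not reproduced here, and `partition_lengths` is a hypothetical helper, not the authors' implementation.

```python
def partition_lengths(lengths, k):
    """Split sorted request lengths into k contiguous intervals,
    minimizing the maximum per-interval total token load (a toy
    proxy for the paper's QoE-driven stage partitioning)."""
    n = len(lengths)
    prefix = [0] * (n + 1)
    for i, x in enumerate(lengths):
        prefix[i + 1] = prefix[i] + x

    INF = float("inf")
    # dp[g][i]: best achievable max-load when the first i requests
    # are split into g contiguous groups; cut[g][i] records the split.
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0
    for g in range(1, k + 1):
        for i in range(g, n + 1):
            for j in range(g - 1, i):
                load = prefix[i] - prefix[j]
                cand = max(dp[g - 1][j], load)
                if cand < dp[g][i]:
                    dp[g][i], cut[g][i] = cand, j

    # Walk the cut table backwards to recover interval boundaries.
    bounds, i = [], n
    for g in range(k, 0, -1):
        j = cut[g][i]
        bounds.append((j, i))
        i = j
    return dp[k][n], bounds[::-1]
```

For example, `partition_lengths([1, 2, 3, 4, 5], 2)` balances the two intervals as `[1, 2, 3]` and `[4, 5]`. The same O(k·n²) recurrence shape applies to richer per-interval cost functions, which is presumably where a QoE model would plug in.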

📝 Abstract
Efficiently harnessing GPU compute is critical to improving user experience and reducing operational costs in large language model (LLM) services. However, current inference engine schedulers overlook the attention backend's sensitivity to request-length heterogeneity within a batch. As state-of-the-art models now support context windows exceeding 128K tokens, this once-tolerable inefficiency has escalated into a primary system bottleneck, causing severe performance degradation through GPU underutilization and increased latency. We present L4, a runtime system that dynamically reschedules requests across multiple instances serving the same LLM to mitigate per-instance length heterogeneity. L4 partitions these instances into length-specialized groups, each handling requests within a designated length range, naturally forming a pipeline as requests flow through them. L4 devises a dynamic programming algorithm to efficiently find the stage partition with the best QoE, employs runtime range refinement together with decentralized load (re)balance both across and within groups, achieving a balanced and efficient multi-instance service. Our evaluation shows that, under the same configuration, L4 reduces end-to-end latency by up to 67% and tail latency by up to 69%, while improving overall system throughput by up to 2.89 times compared to the state-of-the-art multi-instance scheduling systems.
Problem

Research questions and friction points this paper is trying to address.

Attention backends are highly sensitive to request-length heterogeneity within a batch
Ultra-long contexts (>128K tokens) escalate this inefficiency into severe GPU underutilization and latency
Existing multi-instance schedulers ignore per-instance length heterogeneity, degrading throughput and tail latency
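A toy padding model makes the heterogeneity cost above concrete: if a batch is padded to its longest request, a single ultra-long request among short ones wastes most of the compute. The function and the example lengths are illustrative assumptions, not measurements from the paper.

```python
def padded_waste(lengths):
    """Fraction of compute wasted when a batch is padded to its
    longest request (a simplified model of heterogeneity cost)."""
    longest = max(lengths)
    useful = sum(lengths)
    return 1 - useful / (longest * len(lengths))

# One 128K request mixed into a batch of short requests:
mixed = [512, 1024, 131072, 2048]        # ~74% of compute wasted
# Length-specialized grouping keeps similar lengths together:
short_group = [512, 1024, 2048]          # ~42% wasted, and the lone
long_group = [131072]                    # long request wastes nothing
```

Real attention backends do not pad this naively, but the same intuition (work scales with the longest request in the batch) is what length-specialized grouping exploits.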
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic request rescheduling across multiple LLM instances
Length-specialized instance groups forming a processing pipeline
Dynamic programming algorithm optimizing quality of experience
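The grouping-plus-pipeline idea above can be sketched as a two-level dispatch: route a request to the group whose length interval covers it, then pick the least-loaded instance inside that group. The data structures (`upper_bounds`, `loads`) and the tie-breaking are hypothetical; the paper's decentralized intra- and inter-group rebalancing is considerably more involved.

```python
import bisect

def route(length, upper_bounds, loads):
    """Dispatch a request to a length-specialized group, then to the
    least-loaded instance within it.

    upper_bounds: sorted per-group upper length bounds, e.g. [4096, 32768, 131072]
    loads: per-group list of per-instance load counters
    """
    gid = bisect.bisect_left(upper_bounds, length)
    gid = min(gid, len(upper_bounds) - 1)  # overflow goes to the longest-range group
    inst = min(range(len(loads[gid])), key=lambda i: loads[gid][i])
    return gid, inst
```

Because the boundaries live in a small sorted list, refining the length intervals at runtime (as the paper proposes) only means rewriting `upper_bounds`; in-flight routing logic is unchanged.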