Intelligent Router for LLM Workloads: Improving Performance Through Workload-Aware Load Balancing

📅 2024-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In LLM inference, computational disparities between the prefill and decode phases cause load imbalance, resulting in low resource utilization and high end-to-end latency. To address this, we propose a workload-aware intelligent routing mechanism that jointly models response-length prediction and mixed-load interference effects, enabling a fine-grained, reinforcement learning-based request scheduler. Our analysis shows that cluster-level load balancing, not just instance-level scheduling, dominates latency reduction, overcoming a key limitation of prior schedulers. Leveraging heuristic-guided training and a learnable prediction module, our method reduces end-to-end latency by over 11% on public benchmarks and by 7.8% under real-world production traffic from Cloud Provider X. Furthermore, the framework doubles as a latency benchmark co-optimized across model, hardware, and scheduler dimensions, providing a reference point for system-level LLM inference efficiency.
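The sketch below illustrates the kind of scoring such a workload-aware router might perform; it is a minimal illustration, not the paper's implementation. The `Instance` fields, the `predict_response_len` stand-in, and the interference coefficients are all hypothetical assumptions chosen for readability.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    pending_prefill_tokens: int = 0   # tokens queued for the prefill phase
    active_decode_streams: int = 0    # requests currently in the decode phase

def predict_response_len(prompt: str) -> int:
    """Stand-in for the paper's trainable response-length predictor."""
    return max(32, 2 * len(prompt.split()))  # crude placeholder heuristic

def interference(inst: Instance, prompt_tokens: int) -> float:
    """Hypothetical mixed-load penalty: new prefill work slows active decodes."""
    return 0.005 * (inst.pending_prefill_tokens + prompt_tokens) * inst.active_decode_streams

def route(prompt: str, instances: list[Instance]) -> int:
    """Pick the instance with the lowest estimated added load."""
    prompt_tokens = len(prompt.split())
    decode_tokens = predict_response_len(prompt)
    scores = []
    for inst in instances:
        prefill_load = inst.pending_prefill_tokens + prompt_tokens
        decode_load = 64 * inst.active_decode_streams + decode_tokens  # 64: assumed avg remaining tokens per stream
        scores.append(prefill_load + decode_load + interference(inst, prompt_tokens))
    return min(range(len(instances)), key=scores.__getitem__)

# Usage: route a prompt across three instances with different loads.
cluster = [Instance(4000, 2), Instance(500, 10), Instance(1200, 4)]
print(route("Summarize the following document ...", cluster))
```

The idea this mirrors is that the routing decision accounts for both phases: the prefill backlog is driven by prompt length, the decode backlog by the predicted response length, and the cross term penalizes placing heavy prefill work on instances with many active decode streams.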

📝 Abstract
Large Language Model (LLM) workloads have distinct prefill and decode phases with different compute and memory requirements, which should ideally be accounted for when scheduling input queries across different LLM instances in a cluster. However, existing scheduling algorithms treat LLM workloads as monolithic jobs without considering the distinct characteristics of the two phases in each workload. This leads to sub-optimal scheduling and increased response latency. In this work, we start by characterizing the factors affecting response latency during LLM inference serving. We establish that better load balancing of inference requests across the available LLM instances improves end-to-end latency to a larger extent than merely optimizing the instance-level scheduler. Motivated by these findings, we propose a heuristic-guided, reinforcement learning-based intelligent router for data-driven, workload-aware scheduling. Our router schedules queries across LLM instances by leveraging a trainable response-length predictor and a novel formulation for estimating the impact of mixing different workloads, and it achieves over 11% lower end-to-end latency than existing approaches on a mix of public datasets and 7.8% lower end-to-end latency on real workload data with diverse input and output trends from Cloud Provider X. Additionally, the proposed framework can serve as a standard for benchmarking LLM inference schedulers, since it provides the best latency for a given combination of model, hardware, and instance-level scheduler.
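To make the heuristic-guided reinforcement learning idea concrete, here is a hedged sketch of a softmax routing policy whose exploration is guided by a join-the-shortest-queue heuristic and whose reward is the gap between a baseline and the observed latency. The class name, the reward shaping, and the exploration schedule are illustrative assumptions, not taken from the paper.

```python
import math
import random

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

class RoutingPolicy:
    """Toy policy-gradient router; names and reward shaping are assumptions."""

    def __init__(self, n_instances: int, lr: float = 0.05):
        self.weights = [0.0] * n_instances  # learned preference per instance
        self.lr = lr

    def choose(self, queue_lengths: list[int], explore: float = 0.1) -> int:
        # Heuristic guidance: occasionally defer to join-the-shortest-queue.
        if random.random() < explore:
            return queue_lengths.index(min(queue_lengths))
        # Otherwise sample from a softmax over preference minus current load.
        probs = softmax([w - q for w, q in zip(self.weights, queue_lengths)])
        return random.choices(range(len(probs)), weights=probs)[0]

    def update(self, chosen: int, latency: float, baseline: float) -> None:
        # Reward = baseline latency minus observed latency; positive is good.
        self.weights[chosen] += self.lr * (baseline - latency)

# Usage: choose an instance, observe its end-to-end latency, then update.
policy = RoutingPolicy(n_instances=3)
pick = policy.choose(queue_lengths=[5, 2, 9])
policy.update(chosen=pick, latency=1.8, baseline=2.0)
```

In this formulation the heuristic supplies reasonable decisions while the learned preferences are still noisy, which is one plausible reading of what "heuristic-guided" training buys over pure exploration.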
Problem

Research questions and friction points this paper is trying to address.

Task Allocation
Large Language Models
Resource Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Smart Router
Predictive Task Demand
Efficient Task Allocation