🤖 AI Summary
In multi-tenant distributed LLM inference systems, hundreds of heterogeneous LoRA adapters sharing a common base model differ in rank, and this rank heterogeneity produces severe performance skew, GPU underutilization, and SLO violations. This paper proposes the first workload-aware dynamic adapter scheduling framework, integrating fine-grained adapter placement, request routing, and GPUDirect RDMA-enabled cross-GPU remote memory access to jointly optimize compute and communication resources. Its core innovation is modeling LoRA rank heterogeneity explicitly as a hard scheduling constraint and adapting placement dynamically to real-time load distributions. Experiments on production workloads demonstrate up to 2× higher throughput, up to 9× lower first-token latency, and up to 50% fewer GPUs, all while satisfying SLOs.
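The summary does not spell out the placement algorithm, but the idea of treating rank as a first-class scheduling constraint can be illustrated with a greedy load balancer. The sketch below is a minimal Python illustration, assuming a cost model of rank × request rate and a longest-processing-time heuristic; the `Adapter` fields and `place_adapters` function are hypothetical stand-ins, not LoRAServe's actual API.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Adapter:
    name: str
    rank: int        # LoRA rank: the source of size heterogeneity
    req_rate: float  # observed requests/sec from the live workload

def place_adapters(adapters: list[Adapter], num_gpus: int) -> dict[int, list[str]]:
    """Greedy rank-aware placement (assumed cost model, not the paper's):
    score each adapter by rank * request rate and always assign the
    heaviest remaining adapter to the currently least-loaded GPU
    (longest-processing-time heuristic)."""
    # Min-heap of (accumulated_load, gpu_id): the lightest GPU pops first.
    heap = [(0.0, gpu) for gpu in range(num_gpus)]
    heapq.heapify(heap)
    placement: dict[int, list[str]] = {gpu: [] for gpu in range(num_gpus)}
    for a in sorted(adapters, key=lambda a: a.rank * a.req_rate, reverse=True):
        load, gpu = heapq.heappop(heap)
        placement[gpu].append(a.name)
        heapq.heappush(heap, (load + a.rank * a.req_rate, gpu))
    return placement

# Example: one high-rank hot adapter gets a GPU to itself instead of
# being co-batched with everything else.
adapters = [Adapter("a", rank=64, req_rate=5.0),
            Adapter("b", rank=8, req_rate=20.0),
            Adapter("c", rank=128, req_rate=1.0)]
print(place_adapters(adapters, num_gpus=2))  # {0: ['a'], 1: ['b', 'c']}
```

Under a heuristic like this, a few high-rank, high-traffic adapters no longer pile onto one GPU while low-rank adapters leave others idle, which is exactly the skew the paper targets.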
📝 Abstract
Low-Rank Adaptation (LoRA) has become the de facto method for parameter-efficient fine-tuning of large language models (LLMs), enabling rapid adaptation to diverse domains. In production, LoRA-based models are served at scale, creating multi-tenant environments with hundreds of adapters sharing a base model. However, state-of-the-art serving systems co-batch heterogeneous adapters without accounting for rank (size) variability, leading to severe performance skew, which ultimately requires adding more GPUs to satisfy service-level objectives (SLOs). Existing optimizations, focused on loading, caching, and kernel execution, ignore this heterogeneity, leaving GPU resources underutilized. We present LoRAServe, a workload-aware dynamic adapter placement and routing framework designed to tame rank diversity in LoRA serving. By dynamically rebalancing adapters across GPUs and leveraging GPUDirect RDMA for remote access, LoRAServe maximizes throughput and minimizes tail latency under real-world workload drift. Evaluations on production traces from Company X show that LoRAServe achieves up to 2$\times$ higher throughput and up to 9$\times$ lower time-to-first-token (TTFT), while using up to 50% fewer GPUs under SLO constraints compared to state-of-the-art systems.
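To make the rebalancing-plus-remote-access idea concrete, here is a hedged sketch of a router that prefers the adapter's home GPU and spills to the least-loaded GPU, paying a remote weight read (e.g., over GPUDirect RDMA), only when the home queue saturates. The `route_request` signature, the queue threshold, and the spill policy are all illustrative assumptions, not the paper's algorithm.

```python
def route_request(adapter: str,
                  placement: dict[int, list[str]],
                  queue_len: dict[int, int],
                  max_queue: int = 8) -> tuple[int, bool]:
    """Assumed routing policy: prefer the GPU that hosts the adapter's
    weights locally; if its queue is saturated, spill to the
    least-loaded GPU, which reads the adapter weights remotely
    (e.g., over GPUDirect RDMA) rather than waiting for a replica or
    a synchronous migration. Returns (gpu_id, uses_remote_access)."""
    home = next(g for g, names in placement.items() if adapter in names)
    if queue_len[home] <= max_queue:
        return home, False  # fast path: local weights, no remote read
    spill = min(queue_len, key=queue_len.get)  # borrow idle compute
    return spill, spill != home
```

The trade-off a policy like this encodes, accepting a one-off remote weight read to avoid head-of-line blocking on a hot GPU, is the compute/communication balance the abstract attributes to LoRAServe.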