Randomization Boosts KV Caching, Learning Balances Query Load: A Joint Perspective

📅 2026-01-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inherent tension between cache hit rate and load balancing in memory-constrained multi-LLM serving systems, where conventional LRU-based KV cache eviction struggles under dynamic workloads. The paper presents the first theoretical framework for jointly optimizing KV cache eviction and query routing, introducing a coordinated algorithm that integrates theoretically grounded randomized KV cache eviction with online learning–driven adaptive query routing. The approach further supports prefix-sharing across multiple models to enhance cache efficiency. Extensive experiments across four benchmarks and three prefix-sharing configurations demonstrate substantial improvements: up to 6.92× higher cache hit rate, 11.96× lower latency, 14.06× reduction in time-to-first-token, and a 77.4% increase in throughput.

📝 Abstract
KV caching is a fundamental technique for accelerating Large Language Model (LLM) inference by reusing key-value (KV) pairs from previous queries, but its effectiveness under limited memory is highly sensitive to the eviction policy. The default Least Recently Used (LRU) eviction algorithm struggles with dynamic online query arrivals, especially in multi-LLM serving scenarios, where balancing query load across workers and maximizing cache hit rate of each worker are inherently conflicting objectives. We give the first unified mathematical model that captures the core trade-offs between KV cache eviction and query routing. Our analysis reveals the theoretical limitations of existing methods and leads to principled algorithms that integrate provably competitive randomized KV cache eviction with learning-based methods to adaptively route queries with evolving patterns, thus balancing query load and cache hit rate. Our theoretical results are validated by extensive experiments across 4 benchmarks and 3 prefix-sharing settings, demonstrating improvements of up to 6.92$\times$ in cache hit rate, 11.96$\times$ reduction in latency, 14.06$\times$ reduction in time-to-first-token (TTFT), and 77.4% increase in throughput over the state-of-the-art methods. Our code is available at https://github.com/fzwark/KVRouting.
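The abstract contrasts LRU with a provably competitive randomized eviction policy but does not spell out the algorithm. As background, the classic randomized marking algorithm (which is O(log k)-competitive for paging) is a minimal sketch of what "provably competitive randomized eviction" can look like; the class name and structure below are illustrative assumptions, not the paper's actual policy.

```python
import random

class RandomizedMarkingCache:
    """Sketch of the classic randomized marking eviction policy.

    Entries touched in the current phase are "marked"; on a miss with a
    full cache, a uniformly random UNMARKED entry is evicted. When every
    entry is marked, a new phase begins and all marks are cleared.
    Illustrative only -- not the paper's actual eviction algorithm.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}       # key -> cached value (e.g., KV pairs)
        self.marked = set()   # keys touched in the current phase

    def get(self, key):
        if key in self.cache:
            self.marked.add(key)   # cache hit: mark the entry
            return self.cache[key]
        return None                # cache miss

    def put(self, key, value):
        if key not in self.cache and len(self.cache) >= self.capacity:
            unmarked = [k for k in self.cache if k not in self.marked]
            if not unmarked:            # all marked: start a new phase
                self.marked.clear()
                unmarked = list(self.cache)
            victim = random.choice(unmarked)  # evict uniformly at random
            del self.cache[victim]
        self.cache[key] = value
        self.marked.add(key)       # newly inserted entries are marked
```

The randomization is what defeats adversarial access patterns that force LRU into worst-case behavior, which is the motivation the abstract gives for moving beyond LRU under dynamic online arrivals.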
Problem

Research questions and friction points this paper is trying to address.

KV caching
query load balancing
eviction policy
multi-LLM serving
cache hit rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV caching
randomized eviction
query routing
load balancing
LLM inference