MIRAGE: KV Cache Optimization through Parameter Remapping for Multi-tenant LLM Serving

📅 2025-07-15
🤖 AI Summary
In multi-tenant LLM serving, dynamic KV cache updates induce high CPU memory bandwidth pressure and frequent GPU-CPU data transfers. To address this, we propose Parameter Remapping: leveraging the static nature of model parameters, it virtualizes their idle memory space at runtime and dynamically repurposes it for KV caching, enabling efficient reclamation of memory from inactive models in multi-tenant settings. The method exploits high-bandwidth CPU-GPU interconnects (e.g., the NVIDIA Grace Hopper Superchip) and lightweight memory virtualization to minimize cache management overhead. Experiments show that, compared to vLLM, our approach reduces tail time-between-token latency by 44.8%-82.5%, tail time-to-first-token latency by 20.7%-99.3%, and improves throughput by 6.6%-86.7%.

📝 Abstract
KV cache accelerates LLM inference by avoiding redundant computation, at the expense of memory. To support larger KV caches, prior work extends GPU memory with CPU memory via CPU-offloading. This involves swapping KV cache between GPU and CPU memory. However, because the cache updates dynamically, such swapping incurs high CPU memory traffic. We make a key observation that model parameters remain constant during runtime, unlike the dynamically updated KV cache. Building on this, we introduce MIRAGE, which avoids KV cache swapping by remapping, and thereby repurposing, the memory allocated to model parameters for KV cache. This parameter remapping is especially beneficial in multi-tenant environments, where the memory used for the parameters of the inactive models can be more aggressively reclaimed. Exploiting the high CPU-GPU bandwidth offered by the modern hardware, such as the NVIDIA Grace Hopper Superchip, we show that MIRAGE significantly outperforms state-of-the-art solutions, achieving a reduction of 44.8%-82.5% in tail time-between-token latency, 20.7%-99.3% in tail time-to-first-token latency, and 6.6%-86.7% higher throughput compared to vLLM.
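The core idea above can be illustrated with a small, purely conceptual sketch (not the authors' implementation): because model parameters are static and can always be reloaded from host storage, an idle tenant's parameter memory can be tracked as reclaimable and lent out as KV-cache space, instead of swapping KV blocks between CPU and GPU. All class and function names below are hypothetical.

```python
# Conceptual sketch of parameter remapping for multi-tenant serving.
# Assumption: parameter buffers are static, so an inactive model's
# buffer can be safely repurposed for another tenant's KV cache.

class ModelSlot:
    def __init__(self, name: str, param_bytes: int):
        self.name = name
        self.param_bytes = param_bytes  # static parameter footprint
        self.active = True              # currently serving requests?

class RemappingPool:
    """Tracks which parameter regions can be lent out as KV-cache space."""
    def __init__(self):
        self.slots = {}

    def register(self, slot: ModelSlot):
        self.slots[slot.name] = slot

    def deactivate(self, name: str):
        # Model goes idle: its static (reloadable) parameter memory
        # becomes reclaimable for other tenants' KV cache.
        self.slots[name].active = False

    def reclaimable_bytes(self) -> int:
        return sum(s.param_bytes for s in self.slots.values()
                   if not s.active)

pool = RemappingPool()
pool.register(ModelSlot("model-a", 14 * 2**30))  # ~14 GiB of fp16 params
pool.register(ModelSlot("model-b", 14 * 2**30))
pool.deactivate("model-b")
print(pool.reclaimable_bytes() // 2**30)  # GiB now available for KV cache
```

In the paper's setting this bookkeeping is paired with memory virtualization and the high CPU-GPU bandwidth of hardware like Grace Hopper, so reclaimed regions can be remapped with low overhead rather than copied.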
Problem

Research questions and friction points this paper is trying to address.

Optimizes KV cache memory usage in multi-tenant LLM serving
Reduces CPU-GPU swapping overhead via parameter remapping
Improves latency and throughput in dynamic inference environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Remaps model parameter memory for KV cache
Avoids KV cache swapping in multi-tenant environments
Leverages high CPU-GPU bandwidth for performance
👥 Authors
Ruihao Li, The University of Texas at Austin
Shagnik Pal, The University of Texas at Austin
Vineeth Narayan Pullu, The University of Texas at Austin
Prasoon Sinha, The University of Texas at Austin
Jeeho Ryoo, Fairleigh Dickinson University
Lizy K. John, The University of Texas at Austin
Neeraja J. Yadwadkar, The University of Texas at Austin