Accelerating LLM Inference via Dynamic KV Cache Placement in Heterogeneous Memory System

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In large language model (LLM) inference, frequent KV cache accesses severely strain bandwidth and capacity in heterogeneous memory systems (e.g., HBM + high-speed DRAM). This work formally models the dynamic KV cache placement problem under such memory hierarchies for the first time, derives a theoretical upper bound on achievable memory bandwidth utilization, and reveals substantial untapped optimization potential in existing systems. Leveraging attention sparsity patterns and hardware characteristics—including NVLink interconnects and LPDDR5X memory—we formulate a runtime-aware mathematical programming model for adaptive cache scheduling. Our key contributions are: (1) establishing the first theoretical benchmark to quantify the performance gap and improvement ceiling for KV caching; and (2) proposing a scalable, hardware-informed modeling framework that provides both theoretical foundations and practical guidance for efficient KV cache management in heterogeneous memory environments.
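The paper's actual derivation is not reproduced on this page. As a rough illustration only of what such a bandwidth ceiling can look like, an idealized two-tier model (the symbols V, f, B_H, B_D, and C_H are assumptions introduced here, not the paper's notation) balances the per-step service times of HBM and DRAM; when HBM capacity cannot hold the balancing fraction of traffic, DRAM becomes the limiter:

```latex
% Illustrative only: an idealized two-tier ceiling, not the paper's bound.
% V: bytes of KV cache read per decoding step; f: fraction served from HBM;
% B_H, B_D: HBM / DRAM bandwidths; C_H: HBM capacity available to the KV cache.
\[
T(f) = \max\!\left(\frac{f V}{B_H},\ \frac{(1-f)\,V}{B_D}\right),
\qquad
B_{\mathrm{eff}}(f) = \frac{V}{T(f)},
\qquad
0 \le f \le \min\!\left(1,\ \frac{C_H}{V}\right).
\]
% Balancing the two tiers gives f^* = B_H / (B_H + B_D); if capacity permits
% that split, the ceiling is the sum of the tier bandwidths:
\[
B_{\mathrm{eff}}^{\max} =
\begin{cases}
B_H + B_D, & \text{if } \dfrac{C_H}{V} \ge \dfrac{B_H}{B_H + B_D},\\[1ex]
\dfrac{B_D}{1 - C_H/V}, & \text{otherwise.}
\end{cases}
\]
```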

📝 Abstract
Large Language Model (LLM) inference is increasingly constrained by memory bandwidth, with frequent access to the key-value (KV) cache dominating data movement. While attention sparsity reduces some memory traffic, the relevance of past tokens varies over time, requiring the full KV cache to remain accessible and sustaining pressure on both bandwidth and capacity. With advances in interconnects such as NVLink and LPDDR5X, modern AI hardware now integrates high-bandwidth memory (HBM) with high-speed off-package DRAM, making heterogeneous memory systems a practical solution. This work investigates dynamic KV cache placement across such systems to maximize aggregated bandwidth utilization under capacity constraints. Rather than proposing a specific scheduling policy, we formulate the placement problem mathematically and derive a theoretical upper bound, revealing substantial headroom for runtime optimization. To our knowledge, this is the first formal treatment of dynamic KV cache scheduling in heterogeneous memory systems for LLM inference.
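As a minimal sketch of how such a placement problem can be posed, the linear program below maximizes aggregate effective bandwidth (by minimizing the slower tier's per-step service time) subject to an HBM capacity budget, using scipy.optimize.linprog. All tier parameters, block statistics, and the objective are illustrative assumptions; this is not the formulation used in the paper.

```python
# Hypothetical sketch: fractional KV-cache placement across HBM and off-package
# DRAM posed as a linear program with SciPy. All parameters are assumptions
# for illustration; this is not the paper's actual model.
import numpy as np
from scipy.optimize import linprog

B_HBM, B_DRAM = 3.35e12, 0.55e12    # assumed tier bandwidths (bytes/s)
C_HBM = 8e9                         # assumed HBM capacity left for the KV cache (bytes)

rng = np.random.default_rng(0)
n_blocks = 10_000
size = np.full(n_blocks, 2e6)                    # bytes per KV-cache block
hotness = rng.zipf(2.0, n_blocks).astype(float)  # expected reads of each block per step
traffic = hotness * size                         # bytes each block contributes per step

# Decision variables: x_i in [0, 1] = fraction of block i kept in HBM, plus a
# scalar t bounding the per-step service time of either tier (epigraph trick:
# minimizing t maximizes the aggregate effective bandwidth traffic.sum() / t).
#   sum_i traffic_i * x_i       / B_HBM  <= t     (HBM service time)
#   sum_i traffic_i * (1 - x_i) / B_DRAM <= t     (DRAM service time)
#   sum_i size_i * x_i <= C_HBM                   (HBM capacity budget)
c = np.r_[np.zeros(n_blocks), 1.0]               # minimize t
A_ub = np.vstack([
    np.r_[traffic / B_HBM, -1.0],
    np.r_[-traffic / B_DRAM, -1.0],
    np.r_[size, 0.0],
])
b_ub = np.array([0.0, -traffic.sum() / B_DRAM, C_HBM])
bounds = [(0.0, 1.0)] * n_blocks + [(0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_hbm, t = res.x[:n_blocks], res.x[-1]
print(f"HBM bytes placed: {float(size @ x_hbm) / 1e9:.1f} GB, "
      f"effective bandwidth: {traffic.sum() / t / 1e12:.2f} TB/s")
```

A runtime policy would re-solve (or incrementally update) such a model as token relevance and block hotness drift during decoding; the LP value also serves as an instance-specific bandwidth ceiling against which a scheduler can be compared.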
Problem

Research questions and friction points this paper is trying to address.

Optimizing KV cache placement in heterogeneous memory for LLM inference
Maximizing bandwidth utilization under memory capacity constraints
Addressing memory bandwidth bottleneck in large language model inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic KV cache placement in heterogeneous memory
Mathematical formulation of placement problem with theoretical bound
Runtime optimization for aggregated bandwidth utilization
Authors

Yunhua Fang
Graduate Student, Rensselaer Polytechnic Institute
LLM inference, memory architecture
Rui Xie
Rensselaer Polytechnic Institute, Troy, NY 12180 USA
Asad Ul Haq
Graduate Student, RPI
Computer Systems Engineering
Linsen Ma
Rensselaer Polytechnic Institute
Kaoutar El Maghraoui
IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 USA
Naigang Wang
IBM T. J. Watson Research Center (nwang@us.ibm.com)
Deep learning, AI accelerator, on-chip power converter, on-chip inductor/transformer, MEMS transducers
Meng Wang
Rensselaer Polytechnic Institute, Troy, NY 12180 USA
Liu Liu
Rensselaer Polytechnic Institute, Troy, NY 12180 USA
Tong Zhang
Rensselaer Polytechnic Institute, Troy, NY 12180 USA