🤖 AI Summary
In large language model (LLM) inference, frequent KV cache accesses severely strain bandwidth and capacity in heterogeneous memory systems (e.g., HBM + high-speed DRAM). This work formally models the dynamic KV cache placement problem under such memory hierarchies for the first time, derives a theoretical upper bound on achievable memory bandwidth utilization, and reveals substantial untapped optimization potential in existing systems. Leveraging attention sparsity patterns and hardware characteristics—including NVLink interconnects and LPDDR5X memory—we formulate a runtime-aware mathematical programming model for adaptive cache scheduling. Our key contributions are: (1) establishing the first theoretical benchmark to quantify the performance gap and improvement ceiling for KV caching; and (2) proposing a scalable, hardware-informed modeling framework that provides both theoretical foundations and practical guidance for efficient KV cache management in heterogeneous memory environments.
📝 Abstract
Large Language Model (LLM) inference is increasingly constrained by memory bandwidth, with frequent access to the key-value (KV) cache dominating data movement. While attention sparsity reduces some memory traffic, the relevance of past tokens varies over time, so the full KV cache must remain accessible, sustaining pressure on both bandwidth and capacity. With advances in interconnects such as NVLink and in memory technologies such as LPDDR5X, modern AI hardware now integrates high-bandwidth memory (HBM) with high-speed off-package DRAM, making heterogeneous memory systems a practical solution. This work investigates dynamic KV cache placement across such systems to maximize aggregate bandwidth utilization under capacity constraints. Rather than proposing a specific scheduling policy, we formulate the placement problem mathematically and derive a theoretical upper bound, revealing substantial headroom for runtime optimization. To our knowledge, this is the first formal treatment of dynamic KV cache scheduling in heterogeneous memory systems for LLM inference.
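To build intuition for the kind of bound the abstract describes, consider a toy two-tier sketch (this is an illustrative assumption, not the paper's actual model): a KV cache of size `S` is streamed from HBM and off-package DRAM in parallel, and the best achievable aggregate bandwidth follows from balancing the two tiers subject to HBM capacity. The function name and all numbers below are hypothetical.

```python
def aggregate_bandwidth_bound(S, bw_hbm, cap_hbm, bw_dram):
    """Best-case effective bandwidth for reading S GB of KV cache.

    Serving a fraction f from HBM and (1 - f) from DRAM in parallel,
    the read time is max(f*S/bw_hbm, (1-f)*S/bw_dram). Without a
    capacity limit, f = bw_hbm / (bw_hbm + bw_dram) saturates both
    tiers, giving the ideal bound bw_hbm + bw_dram; a small HBM
    capacity clips f and lowers the achievable bound.
    """
    f_ideal = bw_hbm / (bw_hbm + bw_dram)  # balances both tiers
    f = min(f_ideal, cap_hbm / S)          # HBM capacity constraint
    t = max(f * S / bw_hbm, (1 - f) * S / bw_dram)
    return S / t

# Illustrative (assumed) numbers: 3 TB/s HBM with 80 GB free,
# 0.5 TB/s LPDDR5X, 200 GB KV cache.
print(aggregate_bandwidth_bound(200, 3000, 80, 500))  # ~833.3 GB/s
```

With these assumed numbers the capacity constraint binds (only 40% of the cache fits in HBM), so the bound drops from the ideal 3500 GB/s to about 833 GB/s; gaps of this kind are what make the headroom analysis in the abstract meaningful.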