🤖 AI Summary
This study addresses the substantial memory pressure and performance trade-offs in large language model (LLM) inference caused by KV cache growth as context lengths and the number of concurrent requests increase. The authors systematically evaluate three state-of-the-art KV cache management frameworks—vLLM, InfiniGen, and H2O—across diverse workloads, model scales, and sparsity conditions, measuring latency, throughput, and memory efficiency. These frameworks employ techniques such as tensor offloading, token eviction heuristics, and speculative scheduling; by examining them together, the work provides the first comprehensive characterization of the operational boundaries of these strategies under varied deployment scenarios. It identifies the configurations that perform best under specific memory and performance constraints, offering empirical insights and practical guidance for designing efficient LLM inference systems.
📝 Abstract
Efficient inference with Large Language Models (LLMs) increasingly relies on Key-Value (KV) caches to store previously computed key and value vectors at each layer. These caches eliminate redundant computation during autoregressive token generation, lowering the per-token attention cost from quadratic to linear in the sequence length. However, KV cache growth poses significant system-level challenges, particularly as model sizes increase, context lengths grow, and concurrent requests compete for limited memory. Although several frameworks for KV cache management have recently emerged, their comparative trade-offs in memory consumption and inference performance remain poorly understood, especially under varying request sizes and model configurations. In this work, we conduct an empirical study of three state-of-the-art KV cache management frameworks: vLLM, InfiniGen, and H2O. These frameworks employ techniques such as tensor offloading, token eviction heuristics, and speculative scheduling to balance memory usage and performance. We evaluate them on metrics including latency, throughput, and memory usage across a spectrum of key parameters, including request rate, model size, and sparsity level. Our results pinpoint the conditions under which each framework performs best, revealing the most suitable selection and configuration of KV cache strategies under memory and performance constraints.
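To make the caching mechanism concrete, the sketch below (our illustration, not code from the paper or any of the evaluated frameworks) shows a toy single-head decoder step with a growing KV cache, using numpy and omitting batching, masking, and multi-head details. Only the new token's key and value are computed at each step; attention then runs over the cached context, so per-step cost is linear in the current context length while the cache itself grows by one entry per generated token.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class KVCache:
    """Toy single-head KV cache: appends one key/value row per decoding step."""
    def __init__(self, d_head):
        self.keys = np.empty((0, d_head))
        self.values = np.empty((0, d_head))

    def append(self, k, v):
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

def decode_step(x, Wq, Wk, Wv, cache):
    """One autoregressive step: compute Q/K/V for the new token only,
    then attend over all cached keys/values (cost ~ current context length)."""
    q = x @ Wq
    k = x @ Wk
    v = x @ Wv
    cache.append(k, v)                                 # cache grows by one row per token
    scores = q @ cache.keys.T / np.sqrt(q.shape[-1])   # (1, context_len)
    return softmax(scores) @ cache.values              # attention output for the new token

# Minimal usage example with random weights standing in for a trained layer.
rng = np.random.default_rng(0)
d_model, d_head = 8, 8
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
cache = KVCache(d_head)
for t in range(4):                                     # "generate" 4 tokens
    x = rng.normal(size=(1, d_model))                  # stand-in for the new token embedding
    out = decode_step(x, Wq, Wk, Wv, cache)
print("cached K/V rows:", cache.keys.shape[0])         # equals the number of generated tokens
```

The growth visible in `cache.keys.shape[0]` is exactly the memory pressure the study targets: with long contexts and many concurrent requests, these per-token rows dominate GPU memory, which is what offloading and eviction strategies in vLLM, InfiniGen, and H2O aim to manage.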