MatKV: Trading Compute for Flash Storage in LLM Inference

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high energy consumption and latency of KV cache computation during the prefill phase in RAG-based LLM inference, this paper proposes an offline KV materialization and flash-level reuse framework. It is the first to precompute and persistently store document-level KV vectors on high-performance NVMe SSDs, enabling direct loading and reuse during inference. The framework supports pipelined GPU decoding and KV loading, decoupling computation from I/O, and maintains full compatibility with Hugging Face Transformers without requiring model architecture modifications. Experimental results demonstrate that the method reduces RAG inference latency and energy consumption by 50% while preserving question-answering accuracy within ±0.3%. This significantly improves utilization of resource-constrained GPUs and enhances system deployment flexibility.

📝 Abstract
We observe two major trends in LLM-based generative AI: (1) inference is becoming the dominant factor in terms of cost and power consumption, surpassing training, and (2) retrieval augmented generation (RAG) is becoming prevalent. When processing long inputs in RAG, the prefill phase of computing the key-value vectors of input text is energy-intensive and time-consuming even with high-end GPUs. Thus, it is crucial to make the prefill phase in RAG inference efficient. To address this issue, we propose MatKV, a scheme that precomputes the key-value vectors (KVs) of RAG objects (e.g., documents), materializes them in inexpensive but fast and power-efficient flash storage, and reuses them at inference time instead of recomputing the KVs using costly and power-inefficient GPU. Experimental results using Hugging Face's Transformers library across state-of-the-art GPUs and flash memory SSDs confirm that, compared to full KV computation on GPUs, MatKV reduces both inference time and power consumption by half for RAG workloads, without severely impacting accuracy in the question-answering task. Furthermore, we demonstrate that MatKV enables additional optimizations in two ways. First, a GPU can decode text while simultaneously loading the materialized KVs for the next instance, reducing load latency. Second, since decoding speed is less sensitive to GPU performance than KV computation, low-end GPUs can be leveraged for decoding without significantly compromising speed once the materialized KVs are loaded into GPU memory. These findings underscore MatKV's potential to make large-scale generative AI applications more cost-effective, power-efficient, and accessible across a wider range of tasks and hardware environments.
Problem

Research questions and friction points this paper is trying to address.

Optimizes RAG prefill phase efficiency
Reduces GPU compute and energy consumption
Enables cost-effective AI on diverse hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Precomputes key-value vectors for RAG objects
Stores vectors in flash storage for reuse
Reduces GPU computation by loading precomputed KVs
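The abstract's further optimization, decoding the current instance while loading the materialized KVs for the next one, can be sketched with a simple prefetch pipeline. The `decode` and `load_kv` functions below are simulated stand-ins (sleeps in place of GPU work and flash I/O), illustrating only the overlap structure, not MatKV's actual scheduler.

```python
# Sketch of MatKV's pipelining: while the GPU decodes the current request,
# a background thread loads the materialized KVs for the next one, so flash
# I/O overlaps with GPU compute. load_kv/decode are simulated stand-ins.
import threading
import time

def load_kv(doc_id: str, cache: dict) -> None:
    time.sleep(0.01)                    # simulated flash read
    cache[doc_id] = f"kv[{doc_id}]"

def decode(kv: str) -> str:
    time.sleep(0.01)                    # simulated GPU decoding
    return f"answer from {kv}"

def pipelined(doc_ids):
    answers, cache = [], {}
    load_kv(doc_ids[0], cache)          # the first load cannot be overlapped
    for i, doc_id in enumerate(doc_ids):
        prefetch = None
        if i + 1 < len(doc_ids):        # overlap the next load with decoding
            prefetch = threading.Thread(
                target=load_kv, args=(doc_ids[i + 1], cache))
            prefetch.start()
        answers.append(decode(cache[doc_id]))
        if prefetch is not None:
            prefetch.join()
    return answers
```

With loads hidden behind decoding, total latency approaches the decode time alone; this is also why the abstract notes that low-end GPUs suffice once KVs are in GPU memory, since decoding is less compute-bound than prefill.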