HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the explosive KV cache memory growth of large language models (LLMs) during long-context inference, this paper proposes a fine-grained offloading mechanism at the granularity of individual attention heads. It partitions each transformer layer's KV cache per attention head, keeping only a subset of heads' KV cache on the GPU while offloading the remaining heads' KV cache to CPU RAM and fetching it on demand during attention. CPU-GPU cooperative scheduling and head-wise reconstruction of the attention output keep the computation exact, with zero accuracy loss. Crucially, no transformer layer ever needs its full KV cache resident on the GPU, making this the first KV cache management scheme at head granularity. Evaluated on Llama-3-8B with million-token inputs, it reduces GPU KV cache memory from 128 GB to 1 GB and total GPU memory from 207 GB to 17 GB. As a result, a single RTX 4090 GPU supports exact, approximation-free inference on 4-million-token sequences.
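The placement scheme described above can be sketched as a toy simulation. This is an illustrative reconstruction, not the paper's code: the class and method names are invented, NumPy arrays stand in for GPU/CPU tensors, and the "transfer" is just a dictionary lookup. It shows the key property the summary claims: per-head placement changes where K/V live, not the attention result, so the output is exact.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class HeadwiseKVCache:
    """Toy model of head-wise offloading: a few heads' K/V stay on the
    'GPU', the rest live in 'CPU' RAM and are fetched per head at
    attention time. Names are illustrative, not from the paper."""
    def __init__(self, n_heads, gpu_heads):
        self.n_heads = n_heads
        self.gpu_heads = set(range(gpu_heads))
        self.gpu = {}   # head -> (K, V) resident on GPU
        self.cpu = {}   # head -> (K, V) offloaded to CPU RAM

    def append(self, k, v):  # k, v: (n_heads, seq, head_dim)
        for h in range(self.n_heads):
            store = self.gpu if h in self.gpu_heads else self.cpu
            store[h] = (k[h], v[h])

    def attend(self, q):     # q: (n_heads, 1, head_dim)
        outs = []
        for h in range(self.n_heads):
            # Offloaded heads are fetched back (a PCIe copy in reality)
            # before their attention output is computed.
            K, V = self.gpu[h] if h in self.gpu_heads else self.cpu[h]
            scores = q[h] @ K.T / np.sqrt(q.shape[-1])
            outs.append(softmax(scores) @ V)
        # Outputs are reassembled head by head; no approximation involved.
        return np.stack(outs)

rng = np.random.default_rng(0)
H, S, D = 8, 16, 4
k, v, q = (rng.standard_normal((H, S, D)) for _ in range(3))
q1 = q[:, :1, :]
cache = HeadwiseKVCache(n_heads=H, gpu_heads=2)  # 2 of 8 heads "on GPU"
cache.append(k, v)
out = cache.attend(q1)
```

In the real system the per-head fetch is overlapped with GPU compute by the scheduler; the point of the sketch is only that head-wise partitioning is a pure data-placement decision.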

📝 Abstract
Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to CPU RAM while avoiding the need to fully store the KV cache for any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise offloading strategy, maintaining only selected attention heads' KV cache on the GPU while computing attention output dynamically. Through roofline analysis, we demonstrate that HEADINFER maintains computational efficiency while significantly reducing memory footprint. We evaluate HEADINFER on the Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage from 207 GB to 17 GB, achieving a 92% reduction compared to BF16 baseline inference. Notably, HEADINFER enables 4-million-token inference with an 8B model on a single consumer GPU with 24 GB memory (e.g., NVIDIA RTX 4090) without approximation methods.
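The abstract's 128 GB figure can be checked from Llama-3-8B's public configuration (32 layers, 8 KV heads via grouped-query attention, head dimension 128, BF16 at 2 bytes per value); treating "1 million tokens" as 2^20 is an assumption made here for round numbers.

```python
# Back-of-envelope KV cache sizing for Llama-3-8B in BF16.
layers, kv_heads, head_dim, dtype_bytes = 32, 8, 128, 2

# Factor of 2 counts both the key and the value per head per layer.
per_token = 2 * layers * kv_heads * head_dim * dtype_bytes
print(per_token)        # 131072 bytes = 128 KiB per token

tokens = 2**20          # ~1 million tokens (assumed binary million)
total = per_token * tokens
print(total // 2**30)   # 128 GiB, matching the abstract's ~128 GB
```

The same arithmetic makes the 4-million-token claim concrete: the full-residency cache would be ~512 GB, far beyond any single consumer GPU, which is why head-wise offloading to CPU RAM is required.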
Problem

Research questions and friction points this paper is trying to address.

Reduces GPU memory for LLM inference
Offloads KV cache to CPU RAM
Enables long context with consumer GPUs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Head-wise KV cache offloading
Dynamic attention output computation
Significant GPU memory reduction