🤖 AI Summary
To address the excessive KV cache memory overhead of long-context inference with large language models, which often exceeds GPU memory capacity, this paper proposes HCAttention, a heterogeneous attention computation framework. HCAttention integrates key quantization, value offloading to CPU, and dynamic KV eviction, enabling GPU-CPU collaborative approximate attention computation without model fine-tuning and remaining compatible with standard Transformer architectures. On the LongBench benchmark, it preserves full-attention accuracy while shrinking the KV cache to 25% of its original size, and stays competitive at just 12.5%, setting a new state of the art in LLM KV cache compression. It thereby avoids the severe accuracy degradation that existing compression methods exhibit beyond 85% memory reduction. Notably, to the authors' knowledge, HCAttention is the first approach to enable Llama-3-8B to process 4-million-token contexts on a single A100 GPU with 80 GB of memory.
📝 Abstract
Processing long-context inputs with large language models presents a significant challenge due to the enormous memory requirements of the Key-Value (KV) cache during inference. Existing KV cache compression methods exhibit noticeable performance degradation when memory is reduced by more than 85%. Additionally, strategies that leverage GPU-CPU collaboration for approximate attention remain underexplored in this setting. We propose HCAttention, a heterogeneous attention computation framework that integrates key quantization, value offloading, and dynamic KV eviction to enable efficient inference under extreme memory constraints. The method is compatible with existing transformer architectures and does not require model fine-tuning. Experimental results on the LongBench benchmark demonstrate that our approach preserves the accuracy of the full-attention model while shrinking the KV cache memory footprint to 25% of its original size. Remarkably, it stays competitive with only 12.5% of the cache, setting a new state of the art in LLM KV cache compression. To the best of our knowledge, HCAttention is the first to extend the Llama-3-8B model to process 4 million tokens on a single A100 GPU with 80 GB of memory.
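The abstract names three ingredients (key quantization, value offloading, dynamic KV eviction) without specifying the algorithm. The sketch below is a minimal, single-query illustration of how such a pipeline *could* fit together, not the paper's implementation: approximate scores are computed from int8-quantized keys (standing in for the GPU-resident cache), the lowest-scoring tokens are evicted, and only the surviving values are fetched (standing in for a CPU-offloaded value cache). All function names and the `keep_ratio` parameter are mine, chosen for illustration.

```python
import numpy as np

def quantize_keys(K):
    """Per-channel symmetric int8 quantization of the key cache (illustrative)."""
    scale = np.abs(K).max(axis=0, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    return np.round(K / scale).astype(np.int8), scale

def hc_attention_sketch(q, K, V_cpu, keep_ratio=0.25):
    """Approximate attention for one query vector q.

    K      : (n, d) full-precision keys (quantized here to mimic a compact GPU cache)
    V_cpu  : (n, d) values, standing in for a CPU-offloaded value cache
    keep_ratio : fraction of tokens retained after dynamic eviction (assumption)
    """
    # Stage 1: cheap approximate scores from quantized keys.
    Kq, scale = quantize_keys(K)
    approx_scores = (Kq.astype(np.float32) * scale) @ q

    # Stage 2: dynamic eviction -- keep only the top-k scoring tokens.
    k = max(1, int(len(approx_scores) * keep_ratio))
    top = np.argpartition(approx_scores, -k)[-k:]

    # Stage 3: fetch only the retained values (the offloaded part),
    # then do exact scaled-dot-product attention over that subset.
    V_sel = V_cpu[top]
    s = (K[top] @ q) / np.sqrt(q.shape[0])
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ V_sel
```

Because softmax weights concentrate on a few high-scoring tokens, attending over only the retained subset can stay close to full attention; the paper's contribution is making this accurate at much more aggressive retention ratios (down to 12.5%) than prior eviction schemes.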