🤖 AI Summary
This work addresses the memory bottleneck that KV caching creates in large language models during long-context and large-batch inference. The authors propose a self-indexing KV cache in which compressed key representations serve simultaneously as the storage format and as the index for sparse attention retrieval, achieving, for the first time, an end-to-end unification of compression and attention selection without external indices or learned predictors. Combining 1-bit sign vector quantization, custom CUDA kernels, and FlashAttention integration, the method substantially reduces memory footprint while preserving inference efficiency. Experiments show that the technique significantly cuts KV cache memory usage with minimal runtime overhead, making it both practical and scalable.
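To make the compression side of the summary concrete, here is a minimal NumPy sketch of 1-bit sign quantization of cached keys. This is an illustration of the general idea, not the authors' CUDA implementation; the function names and the fp16 baseline are assumptions for the example.

```python
import numpy as np

def pack_signs(K):
    # 1-bit sign quantization: keep only the sign of each key dimension,
    # packed 8 signs per byte. Relative to an fp16 cache this is a 16x
    # reduction in key storage; the same bit pattern later doubles as
    # the retrieval index.
    bits = (K >= 0).astype(np.uint8)          # (n, d) -> {0, 1}
    return np.packbits(bits, axis=1)          # (n, ceil(d / 8)) uint8

def unpack_signs(packed, d):
    # Recover the sign pattern as +/-1 for score estimation.
    bits = np.unpackbits(packed, axis=1)[:, :d]
    return bits.astype(np.int8) * 2 - 1
```

Packing and unpacking round-trip exactly, since only the sign bit is retained by design.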
📝 Abstract
The KV cache in self-attention has emerged as a major bottleneck in long-context and large-batch inference for LLMs. Existing approaches often treat sparsity prediction and compression as separate modules, relying on auxiliary index structures to select relevant tokens, and on complex quantization schemes to reduce memory usage. This fragmented design introduces redundant overhead and limits scalability.
In this paper, we propose a new paradigm: treating the compressed key representation not merely as storage, but as a self-indexing structure that directly enables efficient sparse attention. We design a sign-based 1-bit vector quantization (VQ) scheme that unifies compression and retrieval in a single, hardware-friendly format, eliminating the need for external indices or learning-based predictors and offering a lightweight yet robust solution for memory-constrained inference. Implemented as custom CUDA kernels, our method integrates seamlessly with FlashAttention, adding minimal runtime and memory overhead. Experimental results demonstrate that our approach is both effective and efficient.
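The retrieval side of the paradigm can be sketched as follows: the stored sign bits of each key are scored against the sign pattern of the incoming query, the highest-agreement tokens are selected, and exact attention runs only on that subset. This NumPy sketch is a simplified stand-in for the fused CUDA/FlashAttention path; using raw sign agreement as the retrieval score is an assumption made for illustration.

```python
import numpy as np

def quantize_keys(K):
    # Sign bits of the keys: both the compressed cache and the index.
    return K >= 0                              # (n, d) boolean

def select_topk(q, K_bits, k):
    # Self-indexing retrieval: each dimension where sign(q) matches the
    # stored key sign "votes" for that token; keep the k best-matching tokens.
    agreement = (K_bits == (q >= 0)).sum(axis=1)   # (n,)
    return np.argsort(-agreement)[:k]

def sparse_attention(q, K, V, k):
    # Retrieve candidates via the sign index, then run exact softmax
    # attention over the selected subset only.
    idx = select_topk(q, quantize_keys(K), k)
    scores = K[idx] @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx], idx
```

Note that with k equal to the cache length the sparse path reduces to ordinary full attention, which makes the approximation easy to sanity-check; in practice k would be a small fraction of the context.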