Self-Indexing KVCache: Predicting Sparse Attention from Compressed Keys

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the memory bottleneck caused by KV caching in large language models during long-context and large-batch inference. The authors propose a self-indexing KV cache mechanism in which compressed key representations serve simultaneously as the storage format and the index for sparse attention retrieval. This approach achieves, for the first time, an end-to-end unification of compression and attention selection without relying on external indices or learned predictors. By combining 1-bit sign vector quantization, custom CUDA kernels, and FlashAttention integration, the method substantially reduces memory footprint while maintaining efficient inference. Experimental results demonstrate that the proposed technique significantly cuts KV cache memory usage with minimal runtime overhead, offering both practicality and scalability.
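The core idea above — the compressed cache doubling as its own retrieval index — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which uses custom CUDA kernels); the function names, shapes, and sign-agreement scoring below are illustrative assumptions about how a sign-based 1-bit key cache could also act as the sparse-attention index:

```python
import numpy as np

def quantize_keys_sign(keys: np.ndarray) -> np.ndarray:
    """1-bit sign VQ: keep only the sign of each key dimension.

    keys: (seq_len, head_dim) float array.
    Returns a (seq_len, head_dim) uint8 array of sign bits (1 if > 0).
    """
    return (keys > 0).astype(np.uint8)

def score_by_sign_agreement(query: np.ndarray, key_bits: np.ndarray) -> np.ndarray:
    """Predict attention relevance from the compressed keys alone.

    A cached key whose sign pattern agrees with the query's sign pattern
    on more dimensions is more likely to yield a large dot product, so
    the compressed cache serves as its own index -- no external structure.
    """
    query_bits = (query > 0).astype(np.uint8)
    # Count agreeing sign bits per cached token (higher = more relevant).
    return (key_bits == query_bits).sum(axis=1)

def select_top_k(query: np.ndarray, key_bits: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k cached tokens predicted most relevant."""
    scores = score_by_sign_agreement(query, key_bits)
    return np.argsort(scores)[::-1][:k]
```

In a real kernel the sign bits would be packed into machine words and the agreement count computed with XOR and popcount, which is what makes the format hardware-friendly.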

📝 Abstract
The KV cache in self-attention has emerged as a major bottleneck in long-context and large-batch inference for LLMs. Existing approaches often treat sparsity prediction and compression as separate modules, relying on auxiliary index structures to select relevant tokens, and on complex quantization schemes to reduce memory usage. This fragmented design introduces redundant overhead and limits scalability. In this paper, we propose a novel paradigm: treating the compressed key representation not merely as storage, but as a self-indexing structure that directly enables efficient sparse attention. By designing a sign-based 1-bit vector quantization (VQ) scheme, our method unifies compression and retrieval in a single, hardware-friendly format. This approach eliminates the need for external indices or learning-based predictors, offering a lightweight yet robust solution for memory-constrained inference. All components are designed to be hardware-efficient and easy to implement. By implementing custom CUDA kernels, our method integrates seamlessly with FlashAttention, minimizing additional runtime and memory overhead. Experimental results demonstrate that our approach delivers both effectiveness and efficiency.
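To make the abstract's pipeline concrete, here is a hedged sketch of a single decode step that uses the 1-bit cache as its own index and then runs exact attention over only the selected tokens. The function name and the sign-agreement scoring are assumptions for illustration, not the paper's kernel:

```python
import numpy as np

def sparse_attention_with_self_index(query, keys, values, key_bits, k):
    """Hypothetical decode step unifying compression and retrieval.

    1. Score cached tokens by sign agreement between the query and the
       1-bit compressed keys (the self-index lookup).
    2. Gather only the top-k full-precision keys/values.
    3. Run exact softmax attention over that small subset.
    """
    q_bits = (query > 0).astype(np.uint8)
    scores = (key_bits == q_bits).sum(axis=1)          # index lookup
    top = np.argsort(scores)[::-1][:k]                 # sparse selection
    k_sel, v_sel = keys[top], values[top]              # gather subset
    logits = k_sel @ query / np.sqrt(query.shape[0])   # scaled dot product
    w = np.exp(logits - logits.max())                  # stable softmax
    w /= w.sum()
    return w @ v_sel
```

With k equal to the full sequence length this reduces to dense attention; shrinking k trades a small approximation for proportionally less memory traffic, which is where the FlashAttention-integrated kernels save runtime.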
Problem

Research questions and friction points this paper is trying to address.

KV cache
sparse attention
long-context inference
memory bottleneck
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Indexing KVCache
Sparse Attention
1-bit Vector Quantization
Hardware-Efficient Inference
FlashAttention Integration
Xu Yang
College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
Jiapeng Zhang
College of Computer Science and Electronic Engineering, Hunan University, Changsha, China; The Ministry of Education Key Laboratory of “Fusion Computing of Supercomputing and Artificial Intelligence”, China
Dongyang Zhao
Fudan University
Computer Vision
Guo Chen
Computer Science and Technology, Tsinghua University
Speech Separation · Artificial Intelligence
Zhuo Tang
Central South University