Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the quadratic complexity bottleneck of attention mechanisms in large language models during long-context reasoning by proposing a lightweight, dynamic token-level sparsification method. The approach employs a dynamic, interleaved token selection strategy to compress queries, keys, and values into a reduced token set for attention computation at each layer and head, while restoring the full sequence in subsequent layers to reuse information. This design avoids irreversible early pruning and adapts to varying importance across layers and attention heads. Compatible with both dense and sparse attention kernels—including Flash Attention—the method achieves up to a 3.23× speedup in attention computation at 128K context length with less than 1% accuracy degradation, substantially improving the trade-off between accuracy and latency.

📝 Abstract
The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers, which can retain irrelevant tokens or rely on irreversible early decisions despite the layer-/head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head $Q$, $K$, $V$ to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Furthermore, Token Sparse Attention exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to a 3.23$\times$ attention speedup at 128K context with less than 1% accuracy degradation. These results demonstrate that dynamic and interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
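The compress-attend-decompress cycle described above can be sketched as follows. This is a minimal single-head illustration, not the paper's implementation: the key-norm importance score, the keep ratio, and the zero-fill decompression (relying on the residual stream to carry unselected tokens forward) are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_sparse_attention(q, k, v, keep_ratio=0.5):
    """Hypothetical sketch of token-level sparsification for one head.

    1. Score tokens (key L2 norm is an assumed proxy; the paper's
       scoring rule may differ).
    2. Compress: keep only the top-k tokens of Q, K, V.
    3. Attend over the reduced set (cost drops quadratically in k).
    4. Decompress: scatter outputs back to the full sequence so that
       later layers can reconsider every token position.
    """
    n, d = q.shape
    n_keep = max(1, int(n * keep_ratio))
    score = np.linalg.norm(k, axis=-1)            # assumed importance proxy
    idx = np.sort(np.argsort(-score)[:n_keep])    # top-k, original order
    qs, ks, vs = q[idx], k[idx], v[idx]           # compress Q, K, V
    attn = softmax(qs @ ks.T / np.sqrt(d))        # attention on reduced set
    out_sel = attn @ vs
    out = np.zeros_like(v)                        # decompress: unselected
    out[idx] = out_sel                            # rows pass through as zero
    return out, idx
```

Because selection is recomputed at every layer and head, a token skipped here can still be attended to in the next layer, which is the property that distinguishes this design from irreversible early eviction.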
Problem

Research questions and friction points this paper is trying to address.

long-context inference
attention complexity
token sparsification
quadratic complexity
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token Sparse Attention
dynamic token selection
sparse attention
long-context inference
attention sparsification