IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses redundant computation in sparse attention mechanisms, where each layer independently computes top-k indices even though the selections of adjacent layers are highly similar. To remove this redundancy, the authors propose a cross-layer index reuse strategy that partitions the model into a small number of Full layers, which compute indices, and a majority of Shared layers, which reuse them. The approach combines a training-free greedy search over layer configurations with a training-aware multi-layer attention distillation loss. Evaluated on a 30B DSA model, the method removes 75% of index computation and achieves 1.82× and 1.48× speedups in the prefill and decoding phases, respectively, with negligible accuracy degradation.
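The Full/Shared partition described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `indexer`, `attend`, and the function names are assumptions, and real DSA operates on attention-score tensors rather than Python lists.

```python
def nearest_full_layer(layer, full_layers):
    """Closest Full layer at or before `layer`."""
    return max(l for l in full_layers if l <= layer)

def top_k_indices(scores, k):
    """Indices of the k largest scores (the indexer's output)."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def forward(hidden, num_layers, indexer, attend, full_layers, k):
    cache = {}  # Full-layer id -> its cached top-k token indices
    for i in range(num_layers):
        if i in full_layers:
            # Quadratic indexer cost is paid only on Full layers.
            topk = top_k_indices(indexer(i, hidden), k)
            cache[i] = topk
        else:
            # Shared layer: reuse the nearest Full layer's indices at no cost.
            topk = cache[nearest_full_layer(i, full_layers)]
        hidden = attend(i, hidden, topk)  # O(Lk) sparse core attention
    return hidden
```

With 2 Full layers out of 8, the indexer runs only twice, matching the 75% reduction in index computation the paper reports for its chosen configuration.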

📝 Abstract
Long-context agentic workflows have emerged as a defining use case for large language models, making attention efficiency critical for both inference speed and serving cost. Sparse attention addresses this challenge effectively, and DeepSeek Sparse Attention (DSA) is a representative production-grade solution: a lightweight lightning indexer selects the top-k most relevant tokens per query, reducing core attention from $O(L^2)$ to $O(Lk)$. However, the indexer itself retains $O(L^2)$ complexity and must run independently at every layer, despite the fact that the resulting top-k selections are highly similar across consecutive layers. We present IndexCache, which exploits this cross-layer redundancy by partitioning layers into a small set of Full layers that run their own indexers and a majority of Shared layers that simply reuse the nearest Full layer's top-k indices. We propose two complementary approaches to determine and optimize this configuration. Training-free IndexCache applies a greedy search algorithm that selects which layers retain their indexers by directly minimizing language modeling loss on a calibration set, requiring no weight updates. Training-aware IndexCache introduces a multi-layer distillation loss that trains each retained indexer against the averaged attention distributions of all layers it serves, enabling even simple interleaved patterns to match full-indexer accuracy. Experimental results on a 30B DSA model show that IndexCache can remove 75% of indexer computations with negligible quality degradation, achieving up to 1.82$\times$ prefill speedup and 1.48$\times$ decode speedup compared to standard DSA. These positive results are further confirmed by our preliminary experiments on the production-scale GLM-5 model (Figure 1).
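The training-free variant's greedy search can be sketched as follows. This is a hedged reconstruction from the abstract alone: the function name, the fixed indexer budget, the choice to always retain layer 0, and the `eval_loss` callback (language modeling loss on a calibration set for a candidate Full-layer configuration) are all assumptions.

```python
def greedy_select_full_layers(num_layers, budget, eval_loss):
    # Assume layer 0 must keep its indexer: no earlier indices exist to reuse.
    full = {0}
    while len(full) < budget:
        # Try restoring each remaining indexer; keep the one that most
        # reduces calibration loss. No weight updates are involved.
        best_layer, best_loss = None, float("inf")
        for cand in range(num_layers):
            if cand in full:
                continue
            loss = eval_loss(full | {cand})  # LM loss on the calibration set
            if loss < best_loss:
                best_layer, best_loss = cand, loss
        full.add(best_layer)
    return sorted(full)
```

Each greedy step costs one calibration evaluation per remaining layer, so the search stays cheap relative to any training-based alternative.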
Problem

Research questions and friction points this paper is trying to address.

sparse attention
indexer redundancy
cross-layer similarity
attention efficiency
long-context LLM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Attention
Index Reuse
Cross-Layer Redundancy
Training-Free Optimization
Multi-Layer Distillation