Kascade: A Practical Sparse Attention Method for Long-Context LLM Inference

📅 2025-12-18
🤖 AI Summary
In long-context LLM inference, especially in RAG scenarios, attention computation is the primary latency bottleneck. This paper proposes Kascade, a training-free sparse attention method. It introduces a cross-layer dynamic index-reuse mechanism that selects anchor layers automatically via dynamic programming; it supports head-aware Top-k key indexing with an efficient tile-level implementation; and it operates across both prefill and autoregressive decode. Its FlashAttention-3-compatible kernels are optimized for H100 GPUs. On LongBench and AIME-24, Kascade nearly matches dense-attention accuracy while speeding up decode and prefill attention by up to 4.1× and 2.2×, respectively, over a FlashAttention-3 baseline. The core contribution is an end-to-end deployable paradigm for high-accuracy, low-overhead inter-layer attention index reuse that requires no training and remains fully compatible with standard inference pipelines.

📝 Abstract
Attention is the dominant source of latency during long-context LLM inference, an increasingly popular workload with reasoning models and RAG. We propose Kascade, a training-free sparse attention method that leverages two known observations: 1) post-softmax attention is intrinsically sparse, and 2) the identity of high-weight keys is stable across nearby layers. Kascade computes exact Top-k indices in a small set of anchor layers, then reuses those indices in the intermediate reuse layers. The anchor layers are selected algorithmically, via a dynamic-programming objective that maximizes cross-layer similarity over a development set, allowing easy deployment across models. The method incorporates efficient implementation constraints (e.g., tile-level operations) in both prefill and decode attention. Top-k selection and reuse in Kascade are head-aware, and our experiments show that this is critical for high accuracy. Kascade achieves up to a 4.1× speedup in decode attention and a 2.2× speedup in prefill attention over a FlashAttention-3 baseline on H100 GPUs, while closely matching dense attention accuracy on long-context benchmarks such as LongBench and AIME-24.
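The anchor/reuse idea described above can be illustrated with a minimal sketch: an anchor layer computes exact per-head Top-k key indices, and a reuse layer attends only to the keys those indices select. This is not the paper's kernel (which is tile-level and FlashAttention-3-compatible); the function names, shapes, and NumPy implementation here are illustrative assumptions.

```python
import numpy as np

def topk_indices_per_head(scores, k):
    """Head-aware selection: exact Top-k key indices for each head.
    scores: [num_heads, seq_len] pre-softmax scores for one query position.
    Run in an anchor layer; the returned indices are reused downstream."""
    return np.argsort(scores, axis=-1)[:, -k:]

def sparse_attention(q, K, V, idx):
    """Reuse-layer attention restricted to previously selected keys.
    q: [H, d] query, K/V: [H, S, d] caches, idx: [H, k] anchor indices."""
    H, d = q.shape
    out = np.empty((H, d))
    for h in range(H):
        k_sel, v_sel = K[h, idx[h]], V[h, idx[h]]     # gather per-head keys/values
        s = k_sel @ q[h] / np.sqrt(d)                  # scores over selected keys only
        w = np.exp(s - s.max()); w /= w.sum()          # softmax over the sparse set
        out[h] = w @ v_sel
    return out
```

Keeping a separate index set per head (rather than a shared one) is the "head-aware" property the abstract flags as critical for accuracy: different heads attend to different keys, so a union or shared set would either inflate cost or drop each head's important keys.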
Problem

Research questions and friction points this paper is trying to address.

How to reduce attention latency, the dominant cost in long-context LLM inference
How to exploit attention sparsity without any training, via Top-k index reuse
How to preserve accuracy while accelerating both prefill and decode attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free sparse attention leveraging intrinsic sparsity
Dynamic programming selects anchor layers for index reuse
Head-aware Top-k selection and reuse for high accuracy
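The dynamic-programming anchor selection can be sketched as follows. The paper's exact objective is not reproduced here; this sketch assumes layers are partitioned into contiguous segments, each served by one anchor chosen from within it, with a precomputed similarity matrix `sim` (e.g., Top-k index overlap between layer pairs, measured on a development set) standing in for the real criterion. All names are hypothetical.

```python
def select_anchors(sim, num_anchors):
    """DP sketch: partition layers 0..L-1 into `num_anchors` contiguous
    segments, each reusing one in-segment anchor's Top-k indices; return
    the maximum total cross-layer similarity achievable.
    sim[a][l]: similarity of layer l's Top-k keys to anchor layer a's."""
    L = len(sim)
    # gain[i][j]: best score for segment i..j under its best in-segment anchor
    gain = [[0.0] * L for _ in range(L)]
    for i in range(L):
        for j in range(i, L):
            gain[i][j] = max(sum(sim[a][l] for l in range(i, j + 1))
                             for a in range(i, j + 1))
    NEG = float("-inf")
    # dp[j][m]: best score covering the first j layers with m segments
    dp = [[NEG] * (num_anchors + 1) for _ in range(L + 1)]
    dp[0][0] = 0.0
    for j in range(1, L + 1):
        for m in range(1, num_anchors + 1):
            for i in range(j):  # last segment spans layers i..j-1
                if dp[i][m - 1] > NEG:
                    dp[j][m] = max(dp[j][m], dp[i][m - 1] + gain[i][j - 1])
    return dp[L][num_anchors]
```

Because the similarity matrix is measured once on a development set, this search runs entirely offline per model, which is what makes the method deployable without training.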