Retrospective Sparse Attention for Efficient Long-Context Generation

📅 2025-08-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) face dual bottlenecks in long-context generation: linear growth of KV cache memory and sharply increasing decoding latency. Existing compression methods overlook the attention approximation errors that accumulate over prolonged decoding. To address this, we propose a *retrospective KV cache update* mechanism, the first to introduce *dynamic output correction*: lightweight output caching and sparse recomputation enable context-aware retrospective refinement of historical query-key matches. The method requires no architectural modification and effectively mitigates error propagation. Evaluated on multiple long-text generation benchmarks, it achieves a 1.6× improvement in effective KV coverage and up to a 21.9% gain in accuracy, striking a favorable balance between inference efficiency and generation quality.

๐Ÿ“ Abstract
Large Language Models (LLMs) are increasingly deployed in long-context tasks such as reasoning, code generation, and multi-turn dialogue. However, inference over extended contexts is bottlenecked by the Key-Value (KV) cache, whose memory footprint grows linearly with sequence length and dominates latency at each decoding step. While recent KV cache compression methods identify and load important tokens, they focus predominantly on input contexts and fail to address the cumulative attention errors that arise during long decoding. In this paper, we introduce RetroAttention, a novel KV cache update technique that retrospectively revises past attention outputs using newly arrived KV entries from subsequent decoding steps. By maintaining a lightweight output cache, RetroAttention enables past queries to efficiently access more relevant context, while incurring minimal latency overhead. This breaks the fixed-attention-output paradigm and allows continual correction of prior approximations. Extensive experiments on long-generation benchmarks show that RetroAttention consistently outperforms state-of-the-art (SOTA) KV compression methods, increasing effective KV exposure by up to 1.6× and accuracy by up to 21.9%.
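The abstract's core idea, revising a previously computed attention output as new KV entries arrive, can be sketched with the standard online-softmax recurrence: caching a partial output together with its running max score and softmax denominator lets later KV entries be folded in without revisiting the old ones. This is an illustrative reconstruction under those assumptions, not the paper's actual implementation; the function names (`attend_partial`, `retro_update`) and the NumPy setup are hypothetical.

```python
import numpy as np

def attend(q, K, V):
    """Full softmax attention for one query vector (baseline reference)."""
    s = K @ q / np.sqrt(q.shape[0])
    w = np.exp(s - s.max())
    return (w / w.sum()) @ V

def attend_partial(q, K, V):
    """Attention over a KV subset, plus the running statistics
    (max score, softmax denominator) needed for later correction."""
    s = K @ q / np.sqrt(q.shape[0])
    m = s.max()
    w = np.exp(s - m)
    d = w.sum()
    return (w / d) @ V, m, d

def retro_update(out, m, d, q, K_new, V_new):
    """Fold newly arrived KV entries into a cached attention output
    via the online-softmax recurrence, without touching old KV entries."""
    s = K_new @ q / np.sqrt(q.shape[0])
    m_new = max(m, s.max())
    scale = np.exp(m - m_new)            # rescale the old contribution
    w = np.exp(s - m_new)
    d_new = d * scale + w.sum()
    out_new = (out * d * scale + w @ V_new) / d_new
    return out_new, m_new, d_new
```

After the update, the corrected output matches full attention over the union of old and new KV entries exactly; in a sparse-attention setting the same recurrence lets a cached approximate output be refined whenever newly relevant keys surface during decoding, which is the kind of retrospective correction the abstract describes.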
Problem

Research questions and friction points this paper is trying to address.

KV cache memory and latency bottleneck in long-context LLM inference
Attention errors that accumulate during long decoding
Existing KV compression methods target input context only, not decoding-time errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrospective KV cache update technique
Lightweight output cache for past queries
Continual correction of prior attention errors