Training-Free Exponential Context Extension via Cascading KV Cache

📅 2024-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the quadratic computational overhead and loss of critical information in long-context reasoning with large language models (LLMs), this paper proposes a training-free cascading KV cache mechanism. The method introduces a cascading sub-cache architecture that enables relevance-aware token selection and dynamic sub-cache scheduling, and pairs it with a dedicated prefill-stage strategy that overcomes the latency bottleneck of prior linear caching methods at realistic context sizes. Under a fixed cache capacity, the effective context length grows exponentially. Experiments demonstrate substantial improvements: state-of-the-art accuracy on the 1M-token passkey retrieval task, 6.8× lower prefill latency than FlashAttention, and consistent gains across streaming perplexity, question answering, and book summarization benchmarks.
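The "relevance-aware token selection" described above can be illustrated with a toy sketch: instead of evicting the oldest cached token, evict the one that has received the least attention, tracked as an exponential moving average. This is not the paper's implementation; the class and parameter names are illustrative only.

```python
class RelevanceCache:
    """Toy sketch of relevance-aware KV cache eviction (illustrative,
    not the paper's API): a fixed-capacity cache that evicts the token
    with the lowest exponential moving average (EMA) of received
    attention, rather than simply the oldest token."""

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay      # EMA decay; assumed hyperparameter
        self.tokens = []        # cached token ids (stand-ins for K/V pairs)
        self.scores = []        # EMA of attention mass per cached token

    def update(self, token, attn_weights):
        """Add a new token; attn_weights[i] is the attention the new
        token paid to the i-th cached token."""
        assert len(attn_weights) == len(self.tokens)
        # Decay old relevance scores toward the newly observed attention.
        for i, w in enumerate(attn_weights):
            self.scores[i] = self.decay * self.scores[i] + (1 - self.decay) * w
        self.tokens.append(token)
        self.scores.append(1.0)  # fresh tokens start fully "relevant"
        if len(self.tokens) > self.capacity:
            # Evict the least-attended token, not the oldest one.
            drop = self.scores.index(min(self.scores))
            self.tokens.pop(drop)
            self.scores.pop(drop)
```

Under this policy, an old token that keeps attracting attention (e.g. a passkey) survives, while recent-but-ignored tokens are evicted first.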

📝 Abstract
The transformer's context window is vital for tasks such as few-shot learning and conditional generation, as it preserves previous tokens as active memory. However, as context length increases, computational costs grow quadratically, hindering the deployment of large language models (LLMs) in real-world, long-sequence scenarios. Although some recent key-value caching (KV cache) methods offer linear inference complexity, they manage the stored context naively, prematurely evicting tokens and losing valuable information. Moreover, they lack an optimized prefill/prompt-stage strategy, resulting in higher latency than even quadratic attention at realistic context sizes. In response, we introduce a novel mechanism that leverages cascading sub-cache buffers to selectively retain the most relevant tokens, enabling the model to maintain longer context histories without increasing the cache size. Our approach outperforms linear caching baselines across key benchmarks, including streaming perplexity, question answering, book summarization, and passkey retrieval, where it retains better retrieval accuracy at 1M tokens, four doublings beyond its 65K cache size. Additionally, our method reduces prefill-stage latency by a factor of 6.8 compared to FlashAttention on 1M tokens. These innovations not only enhance the computational efficiency of LLMs but also pave the way for effective deployment in resource-constrained environments, enabling large-scale, real-time applications with significantly reduced latency.
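The cascading sub-cache idea in the abstract can be sketched minimally: a chain of fixed-size buffers where tokens evicted from one level cascade into the next, which accepts only a fraction of arrivals. Each level therefore spans roughly twice the history of the previous one, giving an exponentially longer effective context under fixed total capacity. This is a simplified toy, not the authors' implementation, and every name in it is hypothetical.

```python
from collections import deque

class CascadingKVCache:
    """Toy sketch (illustrative, not the paper's code): a cascade of
    equal-capacity sub-caches. Tokens evicted from level i cascade into
    level i+1, which keeps only every other arrival, so older history is
    thinned exponentially instead of being discarded outright."""

    def __init__(self, num_caches=4, capacity=4):
        self.capacity = capacity
        self.caches = [deque() for _ in range(num_caches)]
        self.arrivals = [0] * num_caches  # per-level arrival counters

    def add(self, token):
        self._insert(0, token)

    def _insert(self, level, token):
        if level >= len(self.caches):
            return  # token falls off the end of the deepest cascade level
        if level > 0:
            # Levels past the first accept only every other evicted token,
            # halving the temporal resolution of older history per level.
            self.arrivals[level] += 1
            if self.arrivals[level] % 2 == 0:
                return
        cache = self.caches[level]
        cache.append(token)
        if len(cache) > self.capacity:
            # Cascade the oldest token down instead of dropping it.
            self._insert(level + 1, cache.popleft())

    def tokens(self):
        # Oldest history first: deepest level, then progressively newer ones.
        out = []
        for cache in reversed(self.caches):
            out.extend(cache)
        return out
```

With four 4-slot sub-caches (16 stored tokens total), a stream of 100 tokens leaves the newest tokens at full resolution while sparse samples reach back well past token 84, which is where a plain 16-slot sliding window would begin.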
Problem

Research questions and friction points this paper is trying to address.

Addresses quadratic computational cost growth with increasing context lengths.
Improves token retention in key-value caching for longer context histories.
Reduces prefill stage latency for large-scale, real-time applications.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cascading sub-cache buffers for relevance-aware token retention
6.8× lower prefill-stage latency than FlashAttention at 1M tokens
Higher retrieval accuracy at 1M tokens under fixed cache capacity