$A^3$: Attention-Aware Accurate KV Cache Fusion for Fast Large Language Model Serving

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high decoding latency, substantial memory overhead, and performance degradation in existing KV cache reuse methods for long-context inference with large language models (LLMs), this paper proposes the Attention-Aware Accurate KV Cache Fusion algorithm ($A^3$). $A^3$ performs fine-grained attention analysis to assess the relevance between text chunks and the question, precomputes the KV states of context chunks, and selectively fuses the most relevant fragments, thereby avoiding the context misalignment inherent in conventional recomputation-based reuse. Its key innovation lies in integrating attention signals directly into cache reuse decisions, enabling accurate, low-overhead KV cache fusion. Experiments across multiple benchmarks and LLMs show that $A^3$ reduces time-to-first-token by up to 2× while achieving higher task accuracy than four state-of-the-art baselines, thus improving both efficiency and accuracy in long-context LLM serving.
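As a rough illustration of the idea described above (not the paper's actual implementation), the two steps of scoring precomputed chunks by their attention to the question and then fusing only the selected KV entries can be sketched as follows. All names, the pooled query vector, and the peak-attention scoring rule are assumptions for the sketch:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def select_chunks(query_q, chunk_keys, top_k=2):
    """Score each precomputed chunk by its peak query->key attention
    and return the indices of the top_k most relevant chunks.

    query_q:    (d,) pooled query vector for the question tokens
    chunk_keys: list of (n_i, d) cached key matrices, one per text chunk
    """
    scores = []
    for K in chunk_keys:
        # Scaled dot-product attention of the question over this chunk.
        attn = softmax(K @ query_q / np.sqrt(K.shape[-1]))
        scores.append(attn.max())  # peak relevance within the chunk
    order = np.argsort(scores)[::-1][:top_k]
    return sorted(order.tolist())  # keep original document order

def fuse_kv(chunk_kv, selected):
    """Concatenate the cached (K, V) pairs of the selected chunks,
    skipping irrelevant chunks instead of recomputing them."""
    Ks = [chunk_kv[i][0] for i in selected]
    Vs = [chunk_kv[i][1] for i in selected]
    return np.concatenate(Ks, axis=0), np.concatenate(Vs, axis=0)
```

In a real serving stack the selection would run per attention head inside the model; the point of the sketch is only that relevance is read off the attention scores before any cache is merged, so fusion touches only question-relevant segments.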

📝 Abstract
Large language models (LLMs) have demonstrated strong capabilities in processing long contexts, enabling them to tackle tasks involving long textual inputs such as multi-turn conversations, legal documents, or retrieved documents in Retrieval-Augmented Generation (RAG) systems. However, despite their ability to handle long sequences, the resulting decoding latency and memory overhead remain substantial, posing challenges for real-world deployment. Recent advances in KV Cache reuse have shown potential to mitigate these costs, but still suffer from notable performance degradation. To address this issue, we conduct an in-depth investigation of recomputation-based reuse methods and observe that the recomputed tokens often fail to align with the context segments most relevant to the question. This misalignment hinders proper updates to the critical contextual representations. Therefore, we propose the $\textbf{A}$ttention-$\textbf{A}$ware $\textbf{A}$ccurate KV Cache Fusion algorithm ($A^3$), which precomputes and selectively fuses the KV Cache of text chunks based on their relevance to the question, achieving accurate integration with minimal computational overhead. Extensive experiments on various benchmarks and LLMs demonstrate that $A^3$ achieves the best task performance compared to four baselines while reducing the time-to-first-token (TTFT) by 2$\times$.
Problem

Research questions and friction points this paper is trying to address.

Reducing KV Cache memory overhead and decoding latency in LLMs
Addressing performance degradation in KV Cache reuse methods
Aligning recomputed tokens with question-relevant context segments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-aware KV cache fusion algorithm
Precomputes KV cache for relevant text chunks
Selectively fuses cache with minimal overhead