EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models

📅 2024-10-20
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
KV cache reuse in large language model (LLM) serving is low under dynamic scenarios such as few-shot learning and multi-document question answering, where conventional prefix-matching mechanisms fail due to strict positional dependencies, hampering inference efficiency. This paper proposes Position-Independent Context caching (PIC), a framework that addresses this limitation. Its core contributions are: (1) AttnLink, a static sparse attention mechanism that compensates for the attention deficits introduced by non-contiguous cache reuse, thereby decoupling cache reuse from token position; and (2) KVSplit, a semantics-preserving dynamic KV chunking method that enables accuracy-aware, modular KV reuse. Experiments show that PIC achieves near-lossless accuracy while reducing time-to-first-token (TTFT) by up to 8x and increasing throughput by up to 7x, substantially improving LLM inference efficiency and scalability.
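
To make the position-independent reuse concrete, below is a minimal sketch assuming a KV cache keyed by the content hash of each token chunk rather than by its prefix position. The class and function names (PositionIndependentKVCache, chunk_key) are illustrative and not taken from the paper, and real cache entries would be per-layer key/value tensors rather than opaque objects.

```python
# Minimal sketch of position-independent KV chunk caching (hypothetical names;
# the paper's actual system design may differ). Chunks are keyed by their token
# content, not their position, so a cached chunk can be reused anywhere in a
# new prompt.
import hashlib
from typing import Dict, List, Tuple


def chunk_key(tokens: List[int]) -> str:
    """Content hash of a token chunk, independent of where it appears."""
    return hashlib.sha256(str(tokens).encode("utf-8")).hexdigest()


class PositionIndependentKVCache:
    def __init__(self) -> None:
        # key -> precomputed KV entry for that chunk (placeholder type)
        self.store: Dict[str, object] = {}

    def lookup(self, chunks: List[List[int]]) -> Tuple[List[object], List[int]]:
        """Return cached KV entries for hits and the indices of chunks
        that still need prefill (misses)."""
        hits, miss_indices = [], []
        for i, chunk in enumerate(chunks):
            kv = self.store.get(chunk_key(chunk))
            if kv is not None:
                hits.append(kv)
            else:
                miss_indices.append(i)
        return hits, miss_indices

    def insert(self, chunk: List[int], kv: object) -> None:
        self.store[chunk_key(chunk)] = kv
```

A prefix-based cache, by contrast, only hits when a new request starts with exactly the same token sequence; content-keyed chunks can hit even when documents or few-shot examples are reordered across requests.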

📝 Abstract
Large Language Models (LLMs) are critical for a wide range of applications, but serving them efficiently becomes increasingly challenging as inputs become more complex. Context caching improves serving performance by exploiting inter-request dependency and reusing the key-value (KV) cache across requests, thus improving time-to-first-token (TTFT). However, existing prefix-based context caching requires exact token prefix matches, limiting cache reuse in few-shot learning, multi-document QA, or retrieval-augmented generation, where prefixes may vary. In this paper, we present EPIC, an LLM serving system that introduces position-independent context caching (PIC), enabling modular KV cache reuse regardless of token chunk position (or prefix). EPIC features two key designs: AttnLink, which leverages static attention sparsity to minimize recomputation for accuracy recovery, and KVSplit, a customizable chunking method that preserves semantic coherence. Our experiments demonstrate that EPIC delivers up to 8x improvement in TTFT and 7x higher throughput over existing systems, with negligible or no accuracy loss. By addressing the limitations of traditional caching approaches, EPIC enables more scalable and efficient LLM inference.
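
Restoring accuracy after stitching non-contiguous cached chunks is the role of AttnLink. The sketch below illustrates the general idea of recomputing only a small number of boundary ("link") tokens per chunk instead of re-prefilling the whole prompt; the function name, the fixed link width, and the index scheme are assumptions made for illustration rather than the paper's exact algorithm.

```python
# Hedged sketch of boundary-token recomputation when reusing non-contiguous
# cached chunks. Only a few "link" tokens at the start of each chunk after the
# first are re-prefilled to restore cross-chunk attention; everything else
# reuses its cached KV. link_width = 16 is an arbitrary illustrative value.
from typing import List


def boundary_recompute_indices(chunk_lengths: List[int], link_width: int = 16) -> List[int]:
    """Indices of tokens whose KV should be recomputed: the first
    `link_width` tokens of every chunk after the first."""
    indices, offset = [], 0
    for i, length in enumerate(chunk_lengths):
        if i > 0:
            indices.extend(range(offset, offset + min(link_width, length)))
        offset += length
    return indices


# Example: three cached chunks of lengths 512, 384, and 640. Only the first 16
# tokens of chunks 2 and 3 are re-prefilled, instead of all 1536 prompt tokens.
print(len(boundary_recompute_indices([512, 384, 640])))  # -> 32
```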
Problem

Research questions and friction points this paper is trying to address.

Efficient serving of Large Language Models
Strict positional dependency of prefix-based KV cache reuse
Improving cache reuse for complex inputs such as few-shot and multi-document prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Position-independent context caching
AttnLink minimizes the recomputation needed for accuracy recovery
KVSplit preserves semantic coherence during chunking