KVFlow: Efficient Prefix Caching for Accelerating LLM-Based Multi-Agent Workflows

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-agent LLM workflows, inefficient KV cache management—particularly the inability of conventional LRU policies to anticipate agent execution sequences—leads to premature eviction and frequent cache misses. To address this, we propose KVFlow, a novel KV cache orchestration framework. Its core contributions are: (1) an Agent Step Graph that explicitly models execution dependencies and temporal ordering among agent steps, yielding per-agent steps-to-execution estimates that guide fine-grained, KV-node-level eviction; and (2) a fully overlapped CPU-GPU asynchronous prefetching mechanism that exploits tree-structured prefix sharing to load soon-to-be-needed KV tensors in the background and maximize cache reuse. Experiments demonstrate that KVFlow achieves up to 1.83× speedup over SGLang's hierarchical radix cache in single-workflow settings, and up to 2.19× speedup under concurrent multi-workflow workloads. These gains stem from significantly reduced redundant computation and memory-swapping overhead.
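The eviction idea in (1) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the Agent Step Graph is modeled as a plain adjacency dict, steps-to-execution is a BFS distance from the currently running agents, and eviction picks the cached prefix whose owner is furthest from its next activation.

```python
from collections import deque

def steps_to_execution(graph, current):
    """BFS over the Agent Step Graph: for each agent, the minimum number
    of workflow steps until it runs next, starting from the agents
    executing now (distance 0)."""
    dist = {a: 0 for a in current}
    q = deque(current)
    while q:
        a = q.popleft()
        for nxt in graph.get(a, []):
            if nxt not in dist:
                dist[nxt] = dist[a] + 1
                q.append(nxt)
    return dist

def pick_victim(cached_agents, dist):
    """Evict the cached prefix whose owner is furthest from reuse.
    Agents absent from `dist` are not reachable (will not run again)
    and are evicted first."""
    return max(cached_agents, key=lambda a: dist.get(a, float("inf")))

# Toy workflow: planner fans out to coder and tester, which join at reviewer.
graph = {"planner": ["coder", "tester"], "coder": ["reviewer"], "tester": ["reviewer"]}
dist = steps_to_execution(graph, current={"planner"})
victim = pick_victim({"coder", "reviewer", "stale_agent"}, dist)  # "stale_agent"
```

Unlike LRU, this keeps `coder` (one step away) resident even if it was used less recently than `stale_agent`, which never runs again.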

📝 Abstract
Large language model (LLM) based agentic workflows have become a popular paradigm for coordinating multiple specialized agents to solve complex tasks. To improve serving efficiency, existing LLM systems employ prefix caching to reuse key-value (KV) tensors corresponding to agents' fixed prompts, thereby avoiding redundant computation across repeated invocations. However, current systems typically evict KV caches using a Least Recently Used (LRU) policy, which fails to anticipate future agent usage and often discards KV caches shortly before their reuse. This leads to frequent cache misses and substantial recomputation or swapping overhead. We present KVFlow, a workflow-aware KV cache management framework tailored for agentic workloads. KVFlow abstracts the agent execution schedule as an Agent Step Graph and assigns each agent a steps-to-execution value that estimates its temporal proximity to future activation. These values guide a fine-grained eviction policy at the KV node level, allowing KVFlow to preserve entries likely to be reused and efficiently manage shared prefixes in tree-structured caches. Moreover, KVFlow introduces a fully overlapped KV prefetching mechanism, which proactively loads required tensors from CPU to GPU in background threads for agents scheduled in the next step, thereby avoiding cache miss stalls during generation. Compared to SGLang with hierarchical radix cache, KVFlow achieves up to 1.83× speedup for single workflows with large prompts, and up to 2.19× speedup for scenarios with many concurrent workflows.
Problem

Research questions and friction points this paper is trying to address.

Optimizes KV cache management for multi-agent LLM workflows
Reduces cache misses by predicting agent reuse patterns
Improves efficiency via overlapped prefetching of shared KV tensors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Workflow-aware KV cache management framework
Agent Step Graph for execution scheduling
Fully overlapped KV prefetching mechanism
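The prefetching mechanism above can be sketched with a background worker that stages KV tensors for next-step agents while the current step generates. This is a minimal assumed sketch: `cpu_store`, `gpu_cache`, and the string placeholders for KV tensors are hypothetical stand-ins for host memory, device memory, and asynchronous host-to-device copies.

```python
import threading
import queue

def prefetch_worker(tasks, gpu_cache, cpu_store):
    """Background thread: move KV entries for soon-to-run agents from the
    CPU store into the GPU cache, overlapping with ongoing generation."""
    while True:
        agent = tasks.get()
        if agent is None:  # shutdown sentinel
            tasks.task_done()
            break
        if agent not in gpu_cache and agent in cpu_store:
            gpu_cache[agent] = cpu_store[agent]  # stands in for an async H2D copy
        tasks.task_done()

cpu_store = {"coder": "kv_coder", "tester": "kv_tester"}
gpu_cache = {}
tasks = queue.Queue()
t = threading.Thread(target=prefetch_worker, args=(tasks, gpu_cache, cpu_store), daemon=True)
t.start()

for agent in ("coder", "tester"):  # agents scheduled for the next step
    tasks.put(agent)
# In the real system, token generation for the current step runs here,
# fully overlapped with the background copies.
tasks.join()
tasks.put(None)
t.join()
```

When the next step begins, both agents' prefixes are already resident on the GPU, so no cache-miss stall interrupts generation.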