Efficient Remote Prefix Fetching with GPU-native Media ASICs

📅 2026-02-10
🤖 AI Summary
This work addresses the significant inference latency incurred by remote key-value (KV) cache reuse under bandwidth-constrained conditions, where conventional compression and decompression introduce substantial overhead. To overcome this challenge, the authors propose KVFetcher, a system that leverages native GPU video codecs for the first time to efficiently compress, transmit, and reconstruct KV caches. By co-designing a codec-friendly tensor layout and a pipelined scheduling mechanism, KVFetcher achieves high compression ratios and low decompression overhead while preserving lossless accuracy. Experimental results across multiple GPU platforms demonstrate that KVFetcher reduces time-to-first-token (TTFT) latency by up to 3.51× compared to existing approaches, offering a substantial improvement in end-to-end inference efficiency.
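The paper does not spell out its codec-friendly tensor layout beyond packing the KV cache into a compact video format while preserving lossless accuracy. A minimal sketch of the idea, assuming fp16 KV entries and an arbitrary 64×64 frame size (both assumptions, not details from the paper): reinterpret the tensor's raw bytes as uint8 planes that a lossless video-codec mode could consume, so the round trip is bit-exact.

```python
import numpy as np

FRAME_HW = (64, 64)  # assumed frame size; real codecs want codec-aligned dims


def kv_to_frames(kv, frame_hw=FRAME_HW):
    """Pack an fp16 KV-cache tensor into uint8 'video frames'.

    Reinterprets the raw bytes (no quantization, so bit-exact),
    padding so the byte stream tiles into whole h*w frames.
    Returns (frames, pad) where pad is the number of padding bytes.
    """
    h, w = frame_hw
    buf = np.frombuffer(kv.astype(np.float16).tobytes(), dtype=np.uint8)
    pad = (-buf.size) % (h * w)
    buf = np.pad(buf, (0, pad))
    return buf.reshape(-1, h, w), pad


def frames_to_kv(frames, pad, shape):
    """Invert kv_to_frames: strip padding, reinterpret bytes as fp16."""
    buf = frames.reshape(-1)
    if pad:
        buf = buf[: buf.size - pad]
    return np.frombuffer(buf.tobytes(), dtype=np.float16).reshape(shape)
```

Because the bytes are only reinterpreted, decoding with a lossless codec followed by `frames_to_kv` reproduces the original tensor exactly, which is consistent with the paper's lossless-accuracy claim; the real system presumably also orders bytes to maximize spatial redundancy within each frame.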

📝 Abstract
Remote KV cache reuse fetches the KV cache for identical contexts from remote storage, avoiding recomputation and accelerating LLM inference. While it excels in high-speed networks, its performance degrades significantly in bandwidth-limited scenarios. Recent studies address this by transmitting KV caches in compressed form, but the associated heavyweight decompression counteracts the benefits of KV reuse. In this paper, we propose an efficient and widely deployable remote KV cache reuse solution that leverages GPU-native video codecs. Our system, KVFetcher, enables effective KV cache coding with two techniques. The codec-friendly tensor layout compresses the KV cache into a highly compact video format, enabling fast transmission. The efficient KV fetcher orchestrates the transmission, decoding, and restoration of compressed KV caches in a pipelined manner, eliminating resource contention, masking network fluctuations, and minimizing time-to-first-token (TTFT). We prototype KVFetcher on diverse GPUs, from high- to low-end. Experiments reveal that it reduces TTFT by up to 3.51× while maintaining lossless accuracy, compared to SOTA methods.
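The pipelined orchestration described above can be sketched as a three-stage producer-consumer pipeline: while one chunk is being restored, the next is decoding and a third is still in flight on the network. This is an illustrative sketch, not the paper's implementation; `fetch`, `decode`, and `restore` are hypothetical stand-ins for the real network, codec, and layout-restoration steps.

```python
import queue
import threading


def pipelined_fetch(chunks, fetch, decode, restore):
    """Overlap fetching, decoding, and restoring of KV-cache chunks.

    Bounded queues (size 2) provide backpressure so no stage races
    far ahead; None acts as an end-of-stream sentinel.
    """
    fetched, decoded = queue.Queue(2), queue.Queue(2)

    def net_stage():
        for c in chunks:
            fetched.put(fetch(c))      # e.g. pull compressed bytes over the network
        fetched.put(None)

    def decode_stage():
        while (item := fetched.get()) is not None:
            decoded.put(decode(item))  # e.g. hand frames to the GPU video decoder
        decoded.put(None)

    threading.Thread(target=net_stage, daemon=True).start()
    threading.Thread(target=decode_stage, daemon=True).start()

    out = []
    while (item := decoded.get()) is not None:
        out.append(restore(item))      # e.g. undo the codec-friendly layout
    return out
```

With per-chunk overlap, total latency approaches the slowest single stage rather than the sum of all three, which is how pipelining masks network fluctuations and trims TTFT.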
Problem

Research questions and friction points this paper is trying to address.

remote KV cache reuse
bandwidth-limited networks
LLM inference acceleration
compression-decompression overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache reuse
GPU-native video codecs
tensor layout optimization
pipelined KV fetching
lossless compression