MPCache: MPC-Friendly KV Cache Eviction for Efficient Private Large Language Model Inference

📅 2025-01-12
🤖 AI Summary
To address the high latency and communication overhead induced by long sequences in private large language model (LLM) inference under secure multi-party computation (MPC), this paper proposes MPCache, the first MPC-friendly hybrid static-dynamic KV cache eviction framework: a look-once static eviction discards unimportant historical tokens, while a query-aware dynamic selection picks a small subset of the remaining tokens for attention computation. Three key techniques reduce the selection overhead: MPC-friendly approximate similarity computation, hierarchical KV cache clustering, and cross-layer index sharing. The method preserves end-to-end privacy and security guarantees while significantly accelerating inference, reducing decoding latency by 1.8×–2.01× and communication by 3.39×–8.37× across different sequence lengths, and consistently outperforms prior KV cache eviction baselines on multiple downstream generation tasks.
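The two-stage idea above can be illustrated in plaintext. The sketch below is a minimal, hypothetical approximation of the pipeline (the paper's actual protocol runs under MPC and uses its own scoring and selection rules, which are not reproduced here): a look-once static step keeps the tokens with the highest accumulated attention scores, then a query-aware dynamic step selects a small subset of the survivors for attention. All function names and the use of accumulated attention as the static score are illustrative assumptions.

```python
import numpy as np

def static_eviction(keys, values, attn_history, keep_ratio=0.5):
    # Look-once static step (illustrative): keep the tokens with the
    # highest accumulated attention scores; run once, not per decoding step.
    n = keys.shape[0]
    k = max(1, int(n * keep_ratio))
    idx = np.sort(np.argsort(attn_history)[-k:])
    return keys[idx], values[idx], idx

def dynamic_selection(query, keys, top_k=2):
    # Query-aware dynamic step (illustrative): at each decoding step,
    # pick the top-k cached tokens most similar to the current query.
    sims = keys @ query
    return np.sort(np.argsort(sims)[-top_k:])

def attention_over_subset(query, keys, values, idx):
    # Attention restricted to the selected subset of the KV cache.
    k, v = keys[idx], values[idx]
    logits = k @ query / np.sqrt(query.shape[0])
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ v

# Toy usage: 8 cached tokens with 4-dimensional heads.
rng = np.random.default_rng(0)
keys = rng.standard_normal((8, 4))
values = rng.standard_normal((8, 4))
attn_history = rng.random(8)
query = rng.standard_normal(4)

k2, v2, kept = static_eviction(keys, values, attn_history)   # 8 -> 4 tokens
sel = dynamic_selection(query, k2)                           # 4 -> 2 tokens
out = attention_over_subset(query, k2, v2, sel)              # attend over 2
```

The point of the split is that the static step's cost is paid once per sequence, while only the much cheaper dynamic step runs inside the per-token decoding loop.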

📝 Abstract
Private large language model (LLM) inference based on secure multi-party computation (MPC) offers cryptographically secure protection for both the user prompt and the proprietary model weights. However, it suffers from large latency overhead, especially for long input sequences. While key-value (KV) cache eviction algorithms have been proposed to reduce the computation and memory cost of plaintext inference, they are not designed for MPC and cannot easily benefit private inference. In this paper, we propose an accurate and MPC-friendly KV cache eviction framework, dubbed MPCache. MPCache is built on the observation that historical tokens in a long sequence may have different effects on the downstream decoding. Hence, MPCache combines a look-once static eviction algorithm to discard unimportant tokens with a query-aware dynamic selection algorithm to further select a small subset of tokens for attention computation. As existing dynamic selection algorithms incur too much latency, we propose a series of optimizations to drastically reduce the KV cache selection overhead, including MPC-friendly similarity approximation, hierarchical KV cache clustering, and a cross-layer index sharing strategy. With extensive experiments, we demonstrate that MPCache consistently outperforms prior-art KV cache eviction baselines across different LLM generation tasks and achieves 1.8×–2.01× decoding latency reduction and 3.39×–8.37× communication reduction on different sequence lengths.
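Of the listed optimizations, hierarchical KV cache clustering admits a simple plaintext illustration: instead of scoring every cached token against the query, tokens are grouped into clusters, the query is first compared against cluster centroids, and exact scoring is then confined to the winning clusters. The sketch below is an illustrative assumption of this idea with fixed-size contiguous clusters; the paper's actual clustering and its MPC protocol are not reproduced here.

```python
import numpy as np

def clustered_selection(query, keys, cluster_size=4, clusters_keep=2):
    # Hierarchical selection sketch (illustrative assumption):
    # 1) partition cached keys into fixed-size contiguous clusters,
    # 2) score only the cluster centroids against the query,
    # 3) return the token indices of the best-scoring clusters.
    n, d = keys.shape
    n_clusters = n // cluster_size
    centroids = keys[: n_clusters * cluster_size].reshape(
        n_clusters, cluster_size, d
    ).mean(axis=1)
    best = np.argsort(centroids @ query)[-clusters_keep:]
    return np.concatenate(
        [np.arange(c * cluster_size, (c + 1) * cluster_size) for c in sorted(best)]
    )

# Toy usage: 16 cached tokens, 8-dimensional keys.
rng = np.random.default_rng(1)
keys = rng.standard_normal((16, 8))
query = rng.standard_normal(8)
idx = clustered_selection(query, keys)  # indices of 2 kept clusters (8 tokens)
```

The win for MPC is in the comparison count: with 16 tokens in clusters of 4, selection needs 4 centroid scores plus the kept tokens, rather than 16 per-token scores, and expensive secure comparisons scale with the number of clusters rather than the sequence length.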
Problem

Research questions and friction points this paper is trying to address.

Long-text Processing
Privacy Protection
Model Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

MPCache
Privacy Protection
Efficiency Optimization