OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In long-context reasoning, large language models (LLMs) incur KV cache memory overhead that scales linearly with both sequence length and batch size. Existing heuristic cache-eviction methods, which rank tokens solely by attention weights, fail to capture each token's actual impact on the final attention output. Method: We propose OBCache, the first framework to formulate KV cache eviction as a layer-wise structured pruning problem. Grounded in Optimal Brain Damage (OBD) theory, OBCache derives closed-form importance scores for keys, values, and key-value pairs, explicitly quantifying each token's perturbation to the attention output. These scores jointly incorporate attention weights, value-state magnitudes, and output sensitivity, enabling output-aware, fine-grained pruning. Results: Extensive experiments on LLaMA and Qwen models show that replacing conventional heuristics with OBCache significantly improves inference accuracy on long-context tasks while maintaining computational efficiency.

📝 Abstract
Large language models (LLMs) with extended context windows enable powerful downstream applications but impose significant memory overhead, as caching all key-value (KV) states scales linearly with sequence length and batch size. Existing cache eviction methods address this by exploiting attention sparsity, yet they typically rank tokens heuristically using accumulated attention weights without considering their true impact on attention outputs. We propose Optimal Brain Cache (OBCache), a principled framework that formulates cache eviction as a layer-wise structured pruning problem. Building upon the Optimal Brain Damage (OBD) theory, OBCache quantifies token saliency by measuring the perturbation in attention outputs induced by pruning tokens, with closed-form scores derived for isolated keys, isolated values, and joint key-value pairs. Our scores account not only for attention weights but also for information from value states and attention outputs, thereby enhancing existing eviction strategies with output-aware signals. Experiments on LLaMA and Qwen models demonstrate that replacing the heuristic scores in existing works, which estimate token saliency across different query positions, with OBCache's output-aware scores consistently improves long-context accuracy.
Problem

Research questions and friction points this paper is trying to address.

Reduces KV cache memory overhead in long-context LLMs
Improves cache eviction by measuring attention output perturbation
Enhances token saliency scoring beyond heuristic attention weights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formulates cache eviction as structured pruning
Quantifies token saliency using output perturbation
Enhances eviction with output-aware scoring signals
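The contrast between heuristic and output-aware scoring can be illustrated with a minimal NumPy sketch. This is not the paper's closed-form OBD derivation; it only shows the qualitative idea that weighting accumulated attention by value-state magnitude approximates a token's contribution to the attention output better than attention weights alone. All function names and shapes here are illustrative assumptions.

```python
import numpy as np

def heuristic_scores(attn):
    # Heuristic saliency (e.g., accumulated attention weights):
    # ranks cached tokens by how much attention they receive,
    # ignoring the value states entirely.
    return attn.sum(axis=0)

def output_aware_scores(attn, values):
    # Illustrative output-aware saliency (NOT the paper's exact formula):
    # scale each token's accumulated attention by the magnitude of its
    # value state, roughly approximating the perturbation to the
    # attention output sum_j a_ij * v_j if token j were evicted.
    return attn.sum(axis=0) * np.linalg.norm(values, axis=-1)

def evict(scores, budget):
    # Keep the `budget` highest-scoring cached tokens; evict the rest.
    return np.sort(np.argsort(scores)[-budget:])

rng = np.random.default_rng(0)
T = 8                                     # number of cached tokens
attn = rng.random((4, T))                 # 4 queries over the cache
attn /= attn.sum(axis=1, keepdims=True)   # softmax-normalized rows
values = rng.standard_normal((T, 16))     # value states, head_dim = 16
kept = evict(output_aware_scores(attn, values), budget=4)
```

A token that receives moderate attention but carries a large value state can outrank a heavily attended token with a near-zero value state, which the pure attention-weight heuristic cannot express.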