SemantiCache: Efficient KV Cache Compression via Semantic Chunking and Clustered Merging

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing KV cache compression methods rely on discrete tokens or non-semantic chunking, often leading to semantic fragmentation and irreversible information loss that degrades model performance. This work is the first to bring the hierarchical semantic structure of language into KV cache compression, proposing a chunking strategy grounded in natural semantic boundaries. It further designs a Greedy Seed-Based Clustering (GSC) algorithm to group tokens into coherent semantic clusters, and combines semantic-core merging with a proportional attention mechanism that rebalances the attention contributions of merged tokens. The proposed approach achieves up to 2.61× decoding speedup across various models and benchmarks, substantially reduces memory consumption, and maintains generation quality comparable to that of the original uncompressed model.

📝 Abstract
Existing KV cache compression methods generally operate on discrete tokens or non-semantic chunks. However, such approaches often lead to semantic fragmentation, where linguistically coherent units are disrupted, causing irreversible information loss and degradation in model performance. To address this, we introduce SemantiCache, a novel compression framework that preserves semantic integrity by aligning the compression process with the semantic hierarchical nature of language. Specifically, we first partition the cache into semantically coherent chunks by delimiters, which are natural semantic boundaries. Within each chunk, we introduce a computationally efficient Greedy Seed-Based Clustering (GSC) algorithm to group tokens into semantic clusters. These clusters are further merged into semantic cores, enhanced by a Proportional Attention mechanism that rebalances the reduced attention contributions of the merged tokens. Extensive experiments across diverse benchmarks and models demonstrate that SemantiCache accelerates the decoding stage of inference by up to 2.61 times and substantially reduces memory footprint, while maintaining performance comparable to the original model.
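The abstract outlines a three-step pipeline: split the cache into chunks at delimiter tokens, cluster tokens within each chunk via Greedy Seed-Based Clustering, then merge clusters into semantic cores whose attention weight is rebalanced proportionally. The paper's exact algorithm is not given here; the following is a minimal sketch under stated assumptions (the delimiter set, the cosine-similarity threshold, mean-pooling for merging, and all function names are assumptions, not the authors' implementation).

```python
# Hedged sketch of the pipeline described in the SemantiCache abstract.
# Assumptions (not from the paper): delimiter set, cosine similarity with a
# fixed threshold for GSC, and mean-pooled KV entries as "semantic cores".
import numpy as np

DELIMITERS = {".", ",", ";", "\n"}  # assumed natural semantic boundaries


def chunk_by_delimiters(tokens):
    """Partition token positions into chunks, closing a chunk at each delimiter."""
    chunks, cur = [], []
    for i, t in enumerate(tokens):
        cur.append(i)
        if t in DELIMITERS:
            chunks.append(cur)
            cur = []
    if cur:
        chunks.append(cur)
    return chunks


def greedy_seed_clusters(keys, idxs, sim_thresh=0.8):
    """Greedy seed-based clustering: each token joins the first existing
    cluster whose seed it is similar enough to, else it seeds a new cluster.
    One pass over the chunk, so the cost is O(tokens * clusters)."""
    seeds, clusters = [], []
    for i in idxs:
        k = keys[i] / (np.linalg.norm(keys[i]) + 1e-8)
        for c, s in enumerate(seeds):
            if float(k @ s) >= sim_thresh:
                clusters[c].append(i)
                break
        else:  # no seed matched: token starts a new cluster
            seeds.append(k)
            clusters.append([i])
    return clusters


def merge_cluster(keys, values, cluster):
    """Merge a cluster into one semantic core (mean KV) and record its size.
    A proportional-attention scheme can then upweight the core's attention
    score by the cluster size so the merged entry stands in for all members."""
    k = keys[cluster].mean(axis=0)
    v = values[cluster].mean(axis=0)
    return k, v, len(cluster)
```

One common way to realize the proportional rebalancing (again an assumption, not necessarily the paper's mechanism) is to add log(cluster_size) to the merged entry's attention logit, so that after the softmax the core receives roughly the combined weight its constituent tokens would have had.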
Problem

Research questions and friction points this paper is trying to address.

KV cache compression
semantic fragmentation
semantic coherence
language modeling
memory efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Chunking
KV Cache Compression
Greedy Seed-Based Clustering
Proportional Attention
Semantic Coherence
Shunlong Wu
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Hai Lin
Electrical Engineering, University of Notre Dame
Cyber-Physical Systems · Hybrid Dynamical Systems · Distributed Cooperative Systems
Shaoshen Chen
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Tingwei Lu
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Yongqin Zeng
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Shaoxiong Zhan
Tsinghua University
Natural Language Processing · Large Language Model
Hai-Tao Zheng
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
Hong-Gee Kim
Seoul National University, South Korea