🤖 AI Summary
Existing KV cache compression methods rely on discrete tokens or non-semantic chunking, often causing semantic fragmentation and irreversible information loss that degrade model performance. This work is the first to bring the hierarchical semantic structure of language into KV cache compression, proposing a chunking strategy grounded in natural semantic boundaries. It further designs a Greedy Seed-Based Clustering (GSC) algorithm to group tokens into coherent semantic clusters, then merges these clusters into semantic cores and applies a proportional attention mechanism to rebalance the merged tokens' reduced attention contributions. The proposed approach achieves up to 2.61× decoding speedup across various models and benchmarks, substantially reduces memory consumption, and maintains generation quality comparable to that of the original uncompressed model.
📝 Abstract
Existing KV cache compression methods generally operate on discrete tokens or non-semantic chunks. However, such approaches often lead to semantic fragmentation, where linguistically coherent units are disrupted, causing irreversible information loss and degraded model performance. To address this, we introduce SemantiCache, a novel compression framework that preserves semantic integrity by aligning the compression process with the hierarchical semantic structure of language. Specifically, we first partition the cache into semantically coherent chunks at delimiters, which serve as natural semantic boundaries. Within each chunk, we introduce a computationally efficient Greedy Seed-Based Clustering (GSC) algorithm to group tokens into semantic clusters. These clusters are further merged into semantic cores, enhanced by a Proportional Attention mechanism that rebalances the reduced attention contributions of the merged tokens. Extensive experiments across diverse benchmarks and models demonstrate that SemantiCache accelerates the decoding stage of inference by up to 2.61× and substantially reduces the memory footprint, while maintaining performance comparable to the original model.
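The pipeline the abstract describes (cluster tokens within a chunk, merge each cluster into a core, reweight attention by cluster size) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the cosine-similarity seeding rule, the mean-pooling merge, and the log-size logit offset in `proportional_attention` are all assumptions standing in for details the abstract does not specify.

```python
import numpy as np

def greedy_seed_clustering(keys, sim_threshold=0.8):
    # Hypothetical GSC sketch over token key vectors within one semantic
    # chunk: the next unassigned token seeds a cluster, and all unassigned
    # tokens whose cosine similarity to the seed exceeds a threshold join it.
    # (The paper's exact seeding and similarity criteria may differ.)
    unit = keys / np.clip(np.linalg.norm(keys, axis=1, keepdims=True), 1e-8, None)
    unassigned = list(range(len(keys)))
    clusters = []
    while unassigned:
        seed = unassigned[0]
        sims = unit[unassigned] @ unit[seed]
        members = [t for t, s in zip(unassigned, sims) if s >= sim_threshold] or [seed]
        clusters.append(members)
        unassigned = [t for t in unassigned if t not in members]
    return clusters

def merge_to_cores(keys, values, clusters):
    # Merge each cluster into one semantic core (here: mean of keys/values),
    # keeping cluster sizes for the proportional-attention step.
    core_k = np.stack([keys[c].mean(axis=0) for c in clusters])
    core_v = np.stack([values[c].mean(axis=0) for c in clusters])
    sizes = np.array([len(c) for c in clusters])
    return core_k, core_v, sizes

def proportional_attention(q, core_k, core_v, sizes):
    # One common way to rebalance merged tokens: offset each core's logit by
    # log(cluster size), so a core contributes in proportion to the number of
    # tokens it absorbed (again an assumption about the paper's mechanism).
    logits = core_k @ q / np.sqrt(q.shape[-1]) + np.log(sizes)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ core_v
```

On a toy chunk with two visibly distinct key directions, `greedy_seed_clustering` yields two clusters, which `merge_to_cores` collapses into two cores that `proportional_attention` then attends over.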