🤖 AI Summary
This work addresses the limitations of large language models in long-context reasoning, which stem from the quadratic complexity of attention mechanisms and the high memory overhead of key-value (KV) caching. To mitigate these issues, the authors propose a boundary-aware chunking strategy that preserves local semantic coherence and introduce a recursive hierarchical index grounded in the triangle inequality. This index transforms KV cache retrieval from linear scanning into a logarithmic-time pruning process. Coupled with a lazy-update mechanism, the approach enables efficient streaming generation. Evaluated across multiple benchmarks, the method achieves up to 3.6× end-to-end inference speedup with negligible degradation in model performance, significantly outperforming existing techniques such as Quest and ClusterKV.
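The summary above attributes the speedup to triangle-inequality pruning over a clustered index, but the paper's actual data structures are not reproduced here. The following is only a minimal, single-level sketch in Python (`Cluster`, `nearest_key`, and the toy 2-D vectors are all hypothetical names, not the authors' code): a query can skip an entire cluster whenever `dist(q, centroid) - radius` is already no smaller than the best distance found so far, because by the triangle inequality no member key can be closer. Applied recursively to clusters of clusters, this kind of bound is what turns a linear scan into the logarithmic-time pruning the summary describes.

```python
import math

class Cluster:
    """A group of cached key vectors summarized by a centroid and radius.
    Hypothetical structure standing in for one node of a hierarchical index."""
    def __init__(self, keys):
        self.keys = keys
        dim = len(keys[0])
        self.centroid = tuple(sum(k[i] for k in keys) / len(keys) for i in range(dim))
        # radius = distance from the centroid to the farthest member key
        self.radius = max(math.dist(k, self.centroid) for k in keys)

def nearest_key(query, clusters):
    """Return the cached key closest to `query`, skipping whole clusters when
    the triangle-inequality lower bound dist(q, centroid) - radius already
    exceeds the best distance found so far."""
    best, best_d, pruned = None, math.inf, 0
    # Visit clusters nearest-centroid first so a tight bound appears early.
    for c in sorted(clusters, key=lambda c: math.dist(query, c.centroid)):
        if math.dist(query, c.centroid) - c.radius >= best_d:
            pruned += 1  # no member of this cluster can beat the current best
            continue
        for k in c.keys:
            d = math.dist(query, k)
            if d < best_d:
                best, best_d = k, d
    return best, best_d, pruned
```

With two well-separated toy clusters, a query near the first one scans it and prunes the second without touching its keys; real KV-cache retrieval would use high-dimensional key vectors and an attention-relevance criterion rather than plain Euclidean nearest-neighbor, so this sketch illustrates only the pruning idea.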
📝 Abstract
The quadratic complexity of the attention mechanism and the substantial memory footprint of the Key-Value (KV) cache present severe computational and memory challenges for Large Language Models (LLMs) processing long contexts. Existing retrieval-based methods often compromise semantic integrity through fixed-size chunking and suffer from inefficient linear scanning. In this paper, we propose LycheeCluster, a novel method for efficient KV cache management. LycheeCluster preserves local semantic coherence via boundary-aware chunking and constructs a recursive hierarchical index rooted in the triangle inequality. This design transforms cache retrieval from a linear scan into a theoretically bounded, logarithmic-time pruning process, while a lazy update strategy supports efficient streaming generation. Experiments demonstrate that LycheeCluster achieves up to a 3.6× end-to-end inference speedup with negligible degradation in model performance, outperforming state-of-the-art KV cache management methods (e.g., Quest, ClusterKV). We will release our code and kernels after publication.