LycheeCluster: Efficient Long-Context Inference with Structure-Aware Chunking and Hierarchical KV Indexing

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of large language models in long-context reasoning, which stem from the quadratic complexity of attention mechanisms and the high memory overhead of key-value (KV) caching. To mitigate these issues, the authors propose a boundary-aware chunking strategy that preserves local semantic coherence and introduce a recursive hierarchical index grounded in the triangle inequality. This index transforms KV cache retrieval from linear scanning into a logarithmic-time pruning process. Coupled with a lazy-update mechanism, the approach enables efficient streaming generation. Evaluated across multiple benchmarks, the method achieves up to 3.6× end-to-end inference speedup with negligible degradation in model performance, significantly outperforming existing techniques such as Quest and ClusterKV.
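The logarithmic-time pruning described above can be illustrated with a small sketch: cluster the cached key vectors into a tree of (centroid, radius) balls, then use the triangle inequality to skip any subtree whose lower bound already exceeds the best distance found. This is a minimal illustration under Euclidean distance, not the paper's released kernels; all class and function names are hypothetical.

```python
import numpy as np

class ClusterNode:
    """One node of a hypothetical hierarchical KV index: a centroid,
    a covering radius, and either child clusters or leaf key vectors."""
    def __init__(self, keys, branching=4, leaf_size=8):
        self.centroid = keys.mean(axis=0)
        self.radius = float(np.max(np.linalg.norm(keys - self.centroid, axis=1)))
        if len(keys) <= leaf_size:
            self.children, self.keys = None, keys
            return
        # crude split: assign each key to the nearest of `branching` seed keys
        seeds = keys[np.random.choice(len(keys), branching, replace=False)]
        assign = np.argmin(np.linalg.norm(keys[:, None] - seeds[None], axis=2), axis=1)
        parts = [keys[assign == b] for b in range(branching) if np.any(assign == b)]
        if len(parts) == 1:  # degenerate split: fall back to a leaf
            self.children, self.keys = None, keys
        else:
            self.keys = None
            self.children = [ClusterNode(p, branching, leaf_size) for p in parts]

def nearest_key(node, q, best=(np.inf, None)):
    """Branch-and-bound search: by the triangle inequality, every key
    inside a cluster is at least dist(q, centroid) - radius from q,
    so whole subtrees can be pruned without scanning their keys."""
    lower_bound = np.linalg.norm(q - node.centroid) - node.radius
    if lower_bound >= best[0]:
        return best                         # prune: nothing here can win
    if node.children is None:               # leaf: scan its few keys
        d = np.linalg.norm(node.keys - q, axis=1)
        i = int(np.argmin(d))
        return (d[i], node.keys[i]) if d[i] < best[0] else best
    # visit children closest-first so the bound tightens early
    for child in sorted(node.children, key=lambda c: np.linalg.norm(q - c.centroid)):
        best = nearest_key(child, q, best)
    return best
```

With a balanced tree, each query descends a few branches instead of scanning every cached key, which is the intuition behind replacing the linear scan with logarithmic-time pruning.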

📝 Abstract
The quadratic complexity of the attention mechanism and the substantial memory footprint of the Key-Value (KV) cache present severe computational and memory challenges for Large Language Models (LLMs) processing long contexts. Existing retrieval-based methods often compromise semantic integrity through fixed-size chunking and suffer from inefficient linear scanning. In this paper, we propose LycheeCluster, a novel method for efficient KV cache management. LycheeCluster preserves local semantic coherence via boundary-aware chunking and constructs a recursive hierarchical index rooted in the triangle inequality. This design transforms cache retrieval from a linear scan into a theoretically bounded, logarithmic-time pruning process, while a lazy update strategy supports efficient streaming generation. Experiments demonstrate that LycheeCluster achieves up to a 3.6x end-to-end inference speedup with negligible degradation in model performance, outperforming state-of-the-art KV cache management methods (e.g., Quest, ClusterKV). We will release our code and kernels after publication.
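The boundary-aware chunking the abstract contrasts with fixed-size chunking can be sketched as follows: greedily pack whole sentences into chunks up to a token budget, splitting only at sentence boundaries so each chunk stays semantically coherent. This is an illustrative sketch, not the paper's implementation; the function name, the regex boundary rule, and the whitespace token proxy are all assumptions.

```python
import re

def boundary_aware_chunks(text, max_tokens=64):
    """Greedily pack whole sentences into chunks of at most
    `max_tokens` (whitespace tokens as a rough proxy), never
    splitting mid-sentence; an over-long sentence gets its own chunk."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and count + n > max_tokens:
            chunks.append(' '.join(current))   # close the chunk at a boundary
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(' '.join(current))
    return chunks
```

Compared with slicing the token stream every `max_tokens` positions, this keeps each chunk's keys locally coherent, which is what makes a single centroid per chunk a meaningful retrieval proxy.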
Problem

Research questions and friction points this paper is trying to address.

long-context inference
KV cache
attention mechanism
memory efficiency
semantic integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

structure-aware chunking
hierarchical KV indexing
long-context inference
attention optimization
lazy update
Dongfang Li
Harbin Institute of Technology, Shenzhen
Natural Language Processing · Large Language Models
Zixuan Liu
University of Electronic Science and Technology of China
Gang Lin
Harbin Institute of Technology
Baotian Hu
Harbin Institute of Technology (Shenzhen)
LLM · MLLM · NLP
Min Zhang
Harbin Institute of Technology