🤖 AI Summary
Large language models (LLMs) face dual bottlenecks in long-context modeling: quadratic attention complexity (O(n²)) and linear memory growth of key-value (KV) caches. Existing KV compression methods, designed post hoc without training-time integration, incur substantial performance degradation at inference and are poorly compatible with post-training workflows. To address this, we propose Lag-Relative Sparse Attention (LRSA), the first method to embed the LagKV compression mechanism directly into model training, enabling parameter-free, gradient-differentiable, and low-overhead sparsification. LRSA introduces a lagged-window top-K selection strategy and chunk-wise prefilling to balance efficiency and robustness. Experiments on question-answering fine-tuning show that LRSA significantly outperforms baselines under compressed contexts, with near-zero performance loss, negligible training overhead, and seamless compatibility with both end-to-end training and post-training, making it the first KV compression approach fully integrated into standard LLM optimization pipelines.
📝 Abstract
Large Language Models (LLMs) have made significant strides in natural language processing and generation, yet their ability to handle long-context input remains constrained by the quadratic complexity of attention computation and the linearly increasing key-value memory footprint. To reduce computational cost and memory usage, key-value cache compression techniques are commonly applied at inference time, but this often leads to severe performance degradation, as models are not trained to handle compressed context. Although more sophisticated compression methods exist, they are typically unsuitable for post-training because of their incompatibility with gradient-based optimization or their high computational overhead. To fill this gap with no additional parameters and little computational overhead, we propose Lag-Relative Sparse Attention (LRSA), anchored by the LagKV compression method, for long-context post-training. Our method performs chunk-by-chunk prefilling, selecting the top-K most relevant key-value pairs within a fixed-size lagging window, which allows the model to focus on salient historical context while maintaining efficiency. Experimental results show that our approach significantly enhances the robustness of the LLM under key-value compression and achieves better fine-tuned results on the question-answering tuning task.
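The chunk-by-chunk prefilling with lagged-window top-K selection described in the abstract can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's implementation: the function name `lagged_topk_select`, the list-based cache layout, and the saliency score (each old entry's maximum scaled dot-product with the current chunk's queries) are all hypothetical choices standing in for the LagKV scoring rule.

```python
import math

def lagged_topk_select(keys, values, queries, top_k, lag):
    """Hypothetical sketch: compress a KV cache during chunked prefill.

    keys/values: lists of d-dim vectors (the cached KV pairs).
    queries: query vectors from the chunk currently being prefilled.
    The most recent `lag` entries are always kept; older entries
    compete for `top_k` slots based on a saliency score.
    """
    n = len(keys)
    if n <= lag + top_k:
        return keys, values  # cache still small enough, nothing to evict
    d = len(keys[0])
    scale = 1.0 / math.sqrt(d)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Saliency of each pre-window entry: its best scaled score against
    # any query of the current chunk (an assumed stand-in for LagKV).
    old = range(n - lag)
    saliency = [max(dot(q, keys[i]) * scale for q in queries) for i in old]
    # Keep the top_k highest-scoring old entries, in original order.
    keep = sorted(sorted(old, key=lambda i: saliency[i])[-top_k:])
    new_keys = [keys[i] for i in keep] + keys[n - lag:]
    new_values = [values[i] for i in keep] + values[n - lag:]
    return new_keys, new_values
```

After each prefilled chunk, the cache would be passed through a selection step like this, so the cache size stays bounded at roughly `top_k + lag` entries regardless of total context length.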