🤖 AI Summary
This work addresses the memory and computational bottlenecks imposed by the KV-Cache in long-context reasoning with large language models. The authors propose RoPE-Aligned Pruning, a method that jointly compresses the KV-Cache, attention parameters, and computational cost while preserving the structural integrity of Rotary Position Embedding (RoPE). Unlike conventional low-rank decomposition approaches, which incur costly reconstruction back to the full dimension, this technique enables absorbable KV compression directly within the RoPE framework without restoring the full dimensionality. Experiments on LLaMA-3-8B and Mistral-7B demonstrate simultaneous reductions of 20–30% in KV-Cache size, parameter count, and FLOPs, with prefill and decoding latencies reduced to 83% and 77% of the baseline, respectively, all while maintaining high accuracy.
📝 Abstract
Long-context inference in large language models is increasingly bottlenecked by the memory and compute cost of the KV-Cache. Low-rank factorization compresses KV projections by writing $W \approx AB$, where $A$ produces latent KV states and $B$ can be absorbed into downstream weights. In modern RoPE-based LLMs, this absorption fails: RoPE forces latent KV states to be reconstructed to full dimension, reintroducing substantial memory and compute overhead. We propose RoPE-Aligned Pruning (RAP), which prunes entire RoPE-aligned column pairs to preserve RoPE's 2x2 rotation structure, restore $B$ absorption, and eliminate reconstruction. Our evaluation on LLaMA-3-8B and Mistral-7B shows that RAP jointly reduces KV-Cache size, attention parameters, and FLOPs by 20-30% while maintaining strong accuracy. Notably, RAP reduces attention latency to 83% (prefill) and 77% (decode) of baseline.
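The core structural constraint above can be sketched in a few lines: RoPE rotates output dimensions in consecutive pairs $(2i, 2i+1)$, so pruning must remove whole pairs to keep the 2x2 rotation blocks intact. The sketch below is illustrative only; the pair-importance score (a simple Frobenius norm here) is an assumption, not the paper's actual criterion, and `rope_aligned_prune` is a hypothetical helper name.

```python
import numpy as np

def rope_aligned_prune(W_k: np.ndarray, keep_pairs: int):
    """Prune a key-projection matrix by whole RoPE dimension pairs.

    RoPE applies a 2x2 rotation to each pair of adjacent output
    dimensions (2i, 2i+1). Dropping only one half of a pair would break
    that block structure, so entire pairs are scored and removed
    together. Toy sketch: the real scoring rule is an assumption here.
    """
    d_model, head_dim = W_k.shape
    n_pairs = head_dim // 2
    pairs = W_k.reshape(d_model, n_pairs, 2)          # group columns into RoPE pairs
    scores = np.linalg.norm(pairs, axis=(0, 2))       # per-pair Frobenius norm (toy importance)
    keep = np.sort(np.argsort(scores)[-keep_pairs:])  # retained pair indices, original order
    W_pruned = pairs[:, keep, :].reshape(d_model, 2 * keep_pairs)
    return W_pruned, keep
```

Because surviving pairs stay adjacent and intact, RoPE can still be applied to the pruned projection (with the retained pairs' rotation frequencies), which is what lets $B$ remain absorbable without reconstructing the full dimension.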