🤖 AI Summary
To address the high latency and GPU-memory bottlenecks caused by the Transformer's quadratic attention complexity in long-context reasoning, this paper proposes HiP (Hierarchically Pruned Attention), a training-free inference-acceleration method. Methodologically, HiP leverages the "attention locality" inherent in pretrained models and introduces a tree-search-based top-k key-selection algorithm, achieving O(T log T) time and O(T) space complexity. It further integrates KV cache offloading, retaining only O(log T) tokens on the GPU, and requires no fine-tuning for deployment. Evaluated on commodity GPUs, HiP supports million-token contexts, substantially reducing prefill and decoding latency while cutting memory consumption significantly, with negligible degradation in generation quality. Its core contribution is the first theoretically grounded, training-free, and practically efficient solution for long-context inference, validated both analytically and empirically.
📝 Abstract
In modern large language models (LLMs), increasing the context length is crucial for improving comprehension and coherence in long-context, multi-modal, and retrieval-augmented language generation. While many recent transformer models attempt to extend their context length over a million tokens, they remain impractical due to the quadratic time and space complexities. Although recent works on linear and sparse attention mechanisms can achieve this goal, their real-world applicability is often limited by the need to re-train from scratch and significantly worse performance. In response, we propose a novel approach, Hierarchically Pruned Attention (HiP), which reduces the time complexity of the attention mechanism to $O(T \log T)$ and the space complexity to $O(T)$, where $T$ is the sequence length. We notice a pattern in the attention scores of pretrained LLMs where tokens close together tend to have similar scores, which we call "attention locality". Based on this observation, we utilize a novel tree-search-like algorithm that estimates the top-$k$ key tokens for a given query on the fly, which is mathematically guaranteed to have better performance than random attention pruning. In addition to improving the time complexity of the attention mechanism, we further optimize GPU memory usage by implementing KV cache offloading, which stores only $O(\log T)$ tokens on the GPU while maintaining similar decoding throughput. Experiments on benchmarks show that HiP, with its training-free nature, significantly reduces both prefill and decoding latencies, as well as memory usage, while maintaining high-quality generation with minimal degradation. HiP enables pretrained LLMs to scale up to millions of tokens on commodity GPUs, potentially unlocking long-context LLM applications previously deemed infeasible.
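To make the tree-search idea concrete, here is a minimal, hedged sketch of hierarchical top-$k$ key selection. It is not the paper's implementation (the function name, the choice of a segment's first key as its representative, and the plain-Python dot products are all illustrative assumptions); it only shows the shape of the search: score one representative key per candidate segment, keep the best $k$ segments, halve them, and repeat. Because of attention locality, a representative's score stands in for its neighbors', so each query needs roughly $O(k)$ scores per level over about $\log_2 T$ levels instead of $O(T)$ scores overall.

```python
def dot(a, b):
    """Plain dot product; a real kernel would batch this on GPU."""
    return sum(x * y for x, y in zip(a, b))

def hip_topk_keys(q, K, k=4):
    """Illustrative hierarchical top-k key selection for one query.

    q: query vector, K: list of T key vectors. Returns k key indices.
    Each round scores one representative key per candidate segment
    (here: the segment's first key, an assumption of this sketch),
    keeps the top-k segments, and splits them in half. Attention
    locality makes the representative a proxy for the whole segment,
    so the search does O(k) scores per level over ~log2(T) levels.
    """
    T = len(K)
    # Start with k roughly equal segments covering [0, T).
    segs = [(i * T // k, (i + 1) * T // k) for i in range(k)]
    segs = [(s, e) for s, e in segs if e > s]
    while max(e - s for s, e in segs) > 1:
        # Split every surviving segment into two halves.
        halves = []
        for s, e in segs:
            m = (s + e) // 2
            if m > s:
                halves.append((s, m))
            if e > m:
                halves.append((m, e))
        # Score each half by its representative key; keep the best k.
        halves.sort(key=lambda se: dot(q, K[se[0]]), reverse=True)
        segs = halves[:k]
    # Each surviving segment is a single key index.
    return sorted(s for s, _ in segs)
```

The selected indices would then feed a sparse attention step over only those $k$ keys; the same locality assumption is what lets the full key set stay off-GPU, with only the candidate tokens on each search path resident in memory.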