InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in long-context reasoning with large language models (excessive memory consumption, slow decoding, and poor length generalization), this paper proposes InfiniteHiP, a fine-tuning-free, efficient inference framework. The method introduces: (1) a modular, hierarchical token pruning algorithm that performs dynamic, relevance-aware context compression; (2) an adaptive RoPE adjustment strategy that enables robust extrapolation beyond pretraining sequence lengths; and (3) a mechanism that offloads the KV cache to host (CPU) memory without permanent loss of context information. On a single L40s GPU (48 GB), the framework enables 3M-token context inference, about 3x the prior capacity, while accelerating attention decoding by 18.95x at the 1M-token scale, all without additional training. This establishes a practical, deployable paradigm for long-context LLM inference.
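The hierarchical token pruning described above can be pictured with a minimal sketch: score fixed-size chunks of the key cache against the current query, keep only the most relevant chunks, and discard the rest from the attention computation. This is an illustrative toy, not the paper's algorithm; the chunk size, the top-k budget, and the max-dot-product scoring rule are all assumptions.

```python
import numpy as np

def hierarchical_prune(query, keys, chunk_size=4, top_k=2):
    """Toy sketch of chunk-level token pruning (not InfiniteHiP itself):
    score each fixed-size chunk of the key cache by its best query-key dot
    product, then keep only the token indices of the top_k chunks."""
    n = keys.shape[0]
    chunk_scores = []
    for start in range(0, n, chunk_size):
        chunk = keys[start:start + chunk_size]
        # Approximate a chunk's relevance by its single best-matching key.
        chunk_scores.append((chunk @ query).max())
    # Chunk indices of the top_k most relevant chunks, in positional order.
    keep = sorted(np.argsort(chunk_scores)[-top_k:])
    return np.concatenate(
        [np.arange(c * chunk_size, min((c + 1) * chunk_size, n)) for c in keep]
    )
```

Attention is then computed only over the returned token indices, which is what shrinks both memory traffic and decoding latency as the context grows.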

📝 Abstract
In modern large language models (LLMs), handling very long context lengths presents significant challenges, as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce InfiniteHiP, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
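The KV cache offloading idea in the abstract, keeping the full cache in host memory while a small fast cache stands in for the GPU, can be sketched as follows. This is a toy model under stated assumptions: a plain dict plays the role of host memory, an LRU window plays the role of the GPU-resident cache, and the capacity and eviction policy are illustrative choices, not the paper's.

```python
from collections import OrderedDict

class OffloadedKVCache:
    """Toy sketch of KV-cache offloading (not the paper's implementation):
    the complete cache lives in 'host' memory and is never evicted, while a
    small LRU window models the GPU-resident cache. Misses copy entries back
    on demand, so no context information is permanently lost."""
    def __init__(self, gpu_capacity=4):
        self.host = {}                 # full KV store in host memory
        self.gpu = OrderedDict()       # small fast cache, LRU-evicted
        self.capacity = gpu_capacity

    def put(self, pos, kv):
        self.host[pos] = kv            # always persist on the host side
        self._to_gpu(pos, kv)

    def get(self, pos):
        if pos in self.gpu:            # GPU hit: refresh recency
            self.gpu.move_to_end(pos)
            return self.gpu[pos]
        kv = self.host[pos]            # GPU miss: fetch back from host
        self._to_gpu(pos, kv)
        return kv

    def _to_gpu(self, pos, kv):
        self.gpu[pos] = kv
        self.gpu.move_to_end(pos)
        if len(self.gpu) > self.capacity:
            self.gpu.popitem(last=False)   # evict least-recently-used entry
```

The key property the sketch preserves is the one the abstract emphasizes: eviction from the fast cache is never a loss of information, only a deferral of a fetch.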
Problem

Research questions and friction points this paper is trying to address.

Extending LLM context to 3 million tokens
Reducing GPU memory pressure
Accelerating long-context processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token pruning algorithm
RoPE adjustment methods
Key-value cache offloading
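For the RoPE adjustment listed above, one widely used family of techniques the framework can draw on is dynamic NTK-style scaling, which enlarges the rotary frequency base once the sequence exceeds the pretraining length. The sketch below shows only that one common rule; InfiniteHiP selects among several adjustment methods based on internal attention patterns, and the function name and defaults here are assumptions.

```python
def scaled_rope_freqs(dim, seq_len, trained_len=8192, base=10000.0):
    """Sketch of dynamic NTK-style RoPE scaling (one common adjustment rule,
    not the paper's full method): when the sequence exceeds the pretraining
    length, enlarge the frequency base so positions extrapolate smoothly."""
    if seq_len > trained_len:
        scale = seq_len / trained_len
        # Standard dynamic-NTK base adjustment.
        base = base * scale ** (dim / (dim - 2))
    # Per-pair rotary frequencies for a head dimension of `dim`.
    return [base ** (-2 * i / dim) for i in range(dim // 2)]
```

Enlarging the base lowers the rotation frequencies, so positions far beyond the training length still fall within angle ranges the model saw during pretraining.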