BEAVER: A Training-Free Hierarchical Prompt Compression Method via Structure-Aware Page Selection

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of high latency and low information utilization in long-context reasoning with large language models, where existing compression methods often suffer from prohibitive training costs or semantic fragmentation. The authors propose a training-free, hierarchical prompt compression framework that transforms linear token pruning into structured page-level scheduling through a structure-aware selection mechanism, balancing semantic coherence and hardware parallelism. The approach integrates dual-path pooling, a semantic-lexical dual-branch selector, and sentence smoothing, coupled with page-level tensor mapping for efficient compression. Evaluated on four long-context benchmarks, the method achieves performance comparable to state-of-the-art baselines, outperforming them on the RULER multi-needle retrieval task and reducing inference latency by 26.4× at 128k context length.

📝 Abstract
The exponential expansion of context windows in LLMs has unlocked capabilities for long-document understanding but introduced severe bottlenecks in inference latency and information utilization. Existing compression methods often suffer from high training costs or semantic fragmentation due to aggressive token pruning. In this paper, we propose BEAVER, a novel training-free framework that shifts compression from linear token removal to structure-aware hierarchical selection. BEAVER maximizes hardware parallelism by mapping variable-length contexts into dense page-level tensors via dual-path pooling, and preserves discourse integrity through a hybrid planner combining semantic and lexical dual-branch selection with sentence smoothing. Extensive evaluations on four long-context benchmarks demonstrate that BEAVER achieves comparable performance to state-of-the-art (SOTA) methods like LongLLMLingua. Notably, on the RULER benchmark, BEAVER maintains high fidelity in multi-needle retrieval where baselines deteriorate. Regarding efficiency, BEAVER reduces latency by 26.4x on 128k contexts, offering a scalable solution for high-throughput applications. Our code is available at https://cslikai.cn/BEAVER/.
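The core idea of page-level selection described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the page size, the mean+max fusion used for "dual-path pooling", and the cosine scoring against a query vector are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of structure-aware page selection.
# Assumptions (not from the paper): fixed page size, mean+max pooling
# fused by averaging, cosine scoring against a single query vector.
import numpy as np

def select_pages(token_embs: np.ndarray, query: np.ndarray,
                 page_size: int = 4, keep: int = 2) -> list:
    """Return indices of the top-`keep` pages, restored to document order."""
    n = token_embs.shape[0]
    n_pages = (n + page_size - 1) // page_size
    scores = []
    for p in range(n_pages):
        page = token_embs[p * page_size:(p + 1) * page_size]
        # Dual-path pooling: a mean branch and a max branch, fused by averaging.
        pooled = 0.5 * (page.mean(axis=0) + page.max(axis=0))
        # Cosine similarity between the pooled page vector and the query.
        denom = np.linalg.norm(pooled) * np.linalg.norm(query) + 1e-8
        scores.append(float(pooled @ query / denom))
    # Keep the highest-scoring pages, then sort so they stay in document order,
    # which preserves discourse flow in the compressed prompt.
    top = sorted(int(i) for i in np.argsort(scores)[-keep:])
    return top

# Toy demo: plant one relevant token in page 1 (tokens 4-7).
embs = np.zeros((16, 8))
embs[5] = 1.0
q = np.ones(8)
print(select_pages(embs, q))  # page 1 is among the selected pages
```

Selecting whole pages rather than individual tokens keeps contiguous spans of text intact (avoiding the semantic fragmentation the abstract mentions) and yields fixed-size blocks that map naturally onto dense tensors for parallel hardware.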
Problem

Research questions and friction points this paper is trying to address.

context compression
inference latency
long-context understanding
semantic fragmentation
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

training-free
hierarchical prompt compression
structure-aware page selection
dual-path pooling
discourse integrity
Zhengpei Hu
School of Computer Technology and Application, Qinghai University
Kai Li
Tsinghua University
Dapeng Fu
Ant Group Security and Intelligence Laboratory (SIL)
Chang Zeng
National Institute of Informatics
speech processing, speech/singing synthesis, audio/music generation, speaker recognition
Yue Li
Department of Computer Science and Technology, Nanjing University
Program Analysis, Programming Languages and Systems, Software Engineering
Yuanhao Tang
School of Computer Technology and Application, Qinghai University
Jianqiang Huang
Nanyang Technological University, Chinese Academy of Sciences
Computer Vision, Machine Learning, Causality