DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing long-context processing methods for large language models (LLMs) suffer from high computational redundancy, while semantic compression approaches often ignore how information density varies across a context. Method: This paper proposes a context-aware dynamic allocation mechanism for soft tokens that jointly leverages local perplexity and global attention weights to estimate block-level information richness, enabling adaptive allocation of learnable soft tokens; it further incorporates importance-weighted context aggregation to better preserve critical semantics. Contribution/Results: Extensive experiments show that the method significantly outperforms state-of-the-art baselines across multiple long-context benchmarks, achieving a superior trade-off between inference efficiency and semantic fidelity and moving beyond the conventional paradigm of uniform context compression.

📝 Abstract
Large Language Models (LLMs) face computational inefficiencies and redundant processing when handling long-context inputs, prompting a focus on compression techniques. While existing semantic vector-based compression methods achieve promising performance, they fail to account for the intrinsic variation in information density between context chunks, instead allocating soft tokens uniformly across chunks. This uniform distribution inevitably diminishes allocation to information-critical regions. To address this, we propose Dynamic Allocation of Soft Tokens (DAST), a simple yet effective method that leverages the LLM's intrinsic understanding of contextual relevance to guide compression. DAST combines perplexity-based local information with attention-driven global information to dynamically allocate soft tokens to information-rich chunks, enabling effective, context-aware compression. Experimental results across multiple benchmarks demonstrate that DAST surpasses state-of-the-art methods.
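The core idea — scoring each chunk with a mix of local perplexity and global attention mass, then splitting a fixed soft-token budget proportionally — can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the mixing weight `alpha`, and the largest-remainder rounding scheme are all assumptions.

```python
import numpy as np

def allocate_soft_tokens(chunk_perplexities, chunk_attention, budget, alpha=0.5):
    """Hypothetical sketch of DAST-style dynamic soft-token allocation.

    Mixes a local signal (per-chunk perplexity) with a global signal
    (attention mass each chunk receives) into an information-richness
    score, then splits a fixed soft-token budget proportionally.
    `alpha` is an illustrative hyperparameter, not from the paper.
    """
    ppl = np.asarray(chunk_perplexities, dtype=float)
    attn = np.asarray(chunk_attention, dtype=float)

    # Normalize each signal to a distribution over chunks.
    score = alpha * ppl / ppl.sum() + (1.0 - alpha) * attn / attn.sum()

    # Proportional allocation, rounded with the largest-remainder method
    # so the per-chunk counts sum exactly to the budget.
    raw = score * budget
    alloc = np.floor(raw).astype(int)
    leftover = budget - alloc.sum()
    order = np.argsort(-(raw - alloc))  # chunks with largest fractional part first
    alloc[order[:leftover]] += 1
    return alloc

# Example: a high-perplexity, high-attention chunk receives more tokens.
print(allocate_soft_tokens([10.0, 2.0, 4.0], [0.5, 0.1, 0.4], budget=16))
```

Uniform allocation would give all chunks the same count; here the information-rich first chunk receives a correspondingly larger share of the budget.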
Problem

Research questions and friction points this paper is trying to address.

Addresses computational inefficiencies in LLMs
Improves context-aware compression techniques
Dynamically allocates soft tokens effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic soft token allocation
Context-aware compression technique
Combines perplexity and attention
Shaoshen Chen
Shenzhen International Graduate School, Tsinghua University
Yangning Li
Shenzhen International Graduate School, Tsinghua University, Peng Cheng Laboratory
Zishan Xu
Tsinghua University
Yinghui Li
Shenzhen International Graduate School, Tsinghua University
Xin Su
WeChat, Tencent
Zifei Shan
Applied Research at Tencent
machine learning · natural language processing · language models · knowledge graphs
Hai-tao Zheng
Shenzhen International Graduate School, Tsinghua University, Peng Cheng Laboratory