Understanding and Improving Information Preservation in Prompt Compression for LLMs

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational overhead, performance degradation, and biases induced by long prompts in large language models (LLMs) on information-intensive tasks, this paper proposes a holistic evaluation framework that assesses prompt compression methods along three dimensions beyond compression ratio: downstream task performance, grounding in the input context, and fine-grained information preservation. The analysis reveals that state-of-the-art soft and hard compression techniques struggle to retain key details from the original prompt, limiting their performance on complex tasks. The authors then show that modifying soft prompting methods to better control the granularity of the compressed information substantially improves effectiveness: up to +23% in downstream task performance, more than +8 BERTScore points in grounding, and 2.7x more entities preserved during compression.

📝 Abstract
Recent advancements in large language models (LLMs) have enabled their successful application to a broad range of tasks. However, in information-intensive tasks, the prompt length can grow fast, leading to increased computational requirements, performance degradation, and induced biases from irrelevant or redundant information. Recently, various prompt compression techniques have been introduced to optimize the trade-off between reducing input length and retaining performance. We propose a holistic evaluation framework that allows for in-depth analysis of prompt compression methods. We focus on three key aspects, besides compression ratio: (i) downstream task performance, (ii) grounding in the input context, and (iii) information preservation. Through this framework, we investigate state-of-the-art soft and hard compression methods, showing that they struggle to preserve key details from the original prompt, limiting their performance on complex tasks. We demonstrate that modifying soft prompting methods to control better the granularity of the compressed information can significantly improve their effectiveness -- up to +23% in downstream task performance, more than +8 BERTScore points in grounding, and 2.7x more entities preserved in compression.
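The abstract's third evaluation axis, information preservation, can be made concrete with a toy metric. The sketch below is a hypothetical illustration, not the paper's implementation: it uses a crude capitalized-span heuristic as a stand-in for a real NER model, and measures what fraction of the original prompt's entities survive compression, alongside a word-level compression ratio.

```python
import re


def extract_entities(text):
    """Crude stand-in for a real NER model (hypothetical heuristic,
    not the paper's method): treat capitalized word spans as
    candidate named entities."""
    return set(re.findall(r"\b[A-Z][a-zA-Z]+(?:\s+[A-Z][a-zA-Z]+)*\b", text))


def entity_retention(original, compressed):
    """Fraction of the original prompt's entities still present in the
    compressed prompt -- a sketch of the fine-grained
    information-preservation axis."""
    orig = extract_entities(original)
    if not orig:
        return 1.0
    return len(orig & extract_entities(compressed)) / len(orig)


def compression_ratio(original, compressed):
    """Word-level length ratio of compressed to original prompt."""
    return len(compressed.split()) / len(original.split())


original = "Marie Curie won the Nobel Prize in Paris in 1903."
compressed = "Marie Curie won Nobel Prize 1903."

retention = entity_retention(original, compressed)  # 2 of 3 entities kept
ratio = compression_ratio(original, compressed)     # 6 of 10 words kept
```

A real evaluation along these lines would swap the regex for a proper NER pipeline; the point is only that entity retention is decoupled from raw compression ratio, which is the distinction the framework measures.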
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompt compression to reduce computational requirements
Preserving key information in compressed prompts for complex tasks
Improving grounding and entity retention in prompt compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Holistic evaluation framework for prompt compression
Modified soft prompting controls information granularity
Improved grounding and entity preservation in compression
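The grounding axis listed above can likewise be sketched with a toy metric. The paper measures grounding with BERTScore over contextual embeddings; the function below is a hypothetical stand-in that substitutes bag-of-words cosine similarity, purely to illustrate the shape of the measurement (answer scored against the input context).

```python
from collections import Counter
from math import sqrt


def grounding_score(context, answer):
    """Hypothetical stand-in for BERTScore-based grounding: cosine
    similarity between bag-of-words count vectors of the model answer
    and the input context. The paper uses contextual embeddings; this
    uses raw token counts only."""
    a = Counter(answer.lower().split())
    b = Counter(context.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

An answer fully supported by the context scores near 1.0, while one sharing no vocabulary with it scores 0.0; the paper's +8-point BERTScore gain reflects the embedding-based analogue of this comparison.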