From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression

📅 2024-10-05
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur substantial computational overhead when processing long, multi-document prompts, and crucial information is easily obscured or lost. To address these challenges, this paper proposes Reading To Compressing (R2C), an unsupervised prompt compression method that repurposes the cross-attention scores of the Fusion-in-Decoder (FiD) architecture to quantify the importance of chunks and sentences, capturing global context without requiring pseudo-labels to train a compressor. Its core idea is that FiD's decoder-to-encoder cross-attention already reveals which passages matter for the query, so importance estimation needs no explicit supervision or manual annotation. Experiments show that R2C reduces prompt length by 80% while improving out-of-domain LLM performance by 6%, preserving essential context and striking a robust balance between inference efficiency and comprehension fidelity.
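To make the selection step concrete, here is a minimal sketch of budget-constrained chunk selection, assuming per-chunk importance scores have already been aggregated from FiD's cross-attention (the function name, the whitespace tokenization, and the greedy top-down selection are illustrative assumptions, not the paper's exact procedure):

```python
def compress_prompt(chunks, attention_scores, keep_ratio=0.2):
    """Keep the highest-scoring chunks until a token budget is met.

    `attention_scores` are assumed to be pre-aggregated per chunk from
    FiD's decoder cross-attention; whitespace splitting stands in for a
    real tokenizer. Both are simplifications for illustration.
    """
    total_tokens = sum(len(c.split()) for c in chunks)
    budget = keep_ratio * total_tokens
    # Rank chunk indices from most to least attended.
    ranked = sorted(range(len(chunks)),
                    key=lambda i: attention_scores[i], reverse=True)
    kept, used = set(), 0
    for i in ranked:
        n = len(chunks[i].split())
        if used + n > budget and kept:
            continue  # skip chunks that would exceed the budget
        kept.add(i)
        used += n
    # Emit kept chunks in their original order to stay coherent.
    return " ".join(chunks[i] for i in sorted(kept))
```

With `keep_ratio=0.4` and three equal-length chunks, only the single most-attended chunk fits the budget, so the compressed prompt contains just that chunk in its original position.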

📝 Abstract
Large language models (LLMs) have achieved significant performance gains using advanced prompting techniques over various tasks. However, the increasing length of prompts leads to high computational costs and often obscures crucial information. Prompt compression has been proposed to alleviate these issues, but it faces challenges in (i) capturing the global context and (ii) training the compressor effectively. To tackle these challenges, we introduce a novel prompt compression method, namely Reading To Compressing (R2C), utilizing the Fusion-in-Decoder (FiD) architecture to identify the important information in the prompt. Specifically, the cross-attention scores of the FiD are used to discern essential chunks and sentences from the prompt. R2C effectively captures the global context without compromising semantic consistency while detouring the necessity of pseudo-labels for training the compressor. Empirical results show that R2C retains key contexts, enhancing the LLM performance by 6% in out-of-domain evaluations while reducing the prompt length by 80%.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Resource Consumption
Information Compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

R2C Method
FiD Decoder Utilization
Prompt Length Reduction
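The "FiD Decoder Utilization" contribution can be sketched as turning a raw cross-attention tensor into per-chunk importance scores. The shape convention `(layers, heads, decoder_steps, encoder_tokens)` and the mean-pooling aggregation below are assumptions for illustration; the paper's exact aggregation scheme may differ:

```python
import numpy as np

def chunk_scores(cross_attn, chunk_spans):
    """Aggregate decoder cross-attention into one score per chunk.

    cross_attn : array of shape (layers, heads, tgt_len, src_len),
        hypothetically collected from a FiD-style decoder.
    chunk_spans : list of (start, end) token ranges locating each
        chunk in the flattened encoder input.
    """
    # Average over layers, heads, and decoder steps to get one
    # attention weight per encoder token.
    per_token = cross_attn.mean(axis=(0, 1, 2))
    # Mean-pool token weights within each chunk's span.
    return [float(per_token[s:e].mean()) for s, e in chunk_spans]
```

Chunks whose tokens consistently receive more decoder attention score higher and survive compression, which is how the method estimates importance without any pseudo-labels.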