CORE: Lossless Compression for Retrieval-Augmented LLMs via Reinforcement Learning

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive input length and high computational overhead caused by redundant documents in retrieval-augmented generation (RAG), this paper proposes a lossless context compression framework based on reinforcement learning. Unlike conventional methods that rely on fixed heuristics or predefined supervision labels, the approach is trained end to end with task performance—specifically, final answer accuracy—as the sole reward signal, enabling dynamic, task-adaptive document compression. The compressor is trained with Group Relative Policy Optimization (GRPO) to maximize the accuracy of the downstream LLM. Evaluated on four knowledge-intensive benchmarks, the method compresses documents to an average of 3% of their original length while improving Exact Match scores by 3.3 points, without degrading task performance. The key contribution is the first application of task-driven reinforcement learning to RAG context compression—eliminating dependence on hand-crafted rules and labeled data—thereby simultaneously improving inference efficiency and answer accuracy.
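The summary above says final answer accuracy is the sole reward signal. A minimal sketch of what such a reward could look like, assuming the standard Exact Match normalization used in QA benchmarks (the function names here are illustrative, not from the paper):

```python
import re
import string


def normalize(text: str) -> str:
    """Standard EM normalization: lowercase, drop punctuation and
    articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def em_reward(predicted: str, gold: str) -> float:
    """Binary end-task reward: 1.0 if the LLM's answer (generated from
    the compressed context) exactly matches the gold answer, else 0.0."""
    return 1.0 if normalize(predicted) == normalize(gold) else 0.0
```

With a binary reward like this, the compressor receives no credit for summaries that merely look fluent; only summaries that let the LLM answer correctly are reinforced.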

📝 Abstract
Retrieval-Augmented Generation (RAG) has emerged as a promising approach to enhance the timeliness of knowledge and the factual accuracy of responses in Large Language Models (LLMs). However, the inclusion of excessive retrieved documents substantially increases the input length, leading to higher computational costs. Previous studies have attempted to compress retrieved documents into shorter texts before in-context integration, but such methods often compromise end-task performance. The lack of well-defined compression targets forces many approaches to rely on fixed heuristics, which cannot guarantee that the compressed content will effectively support the end task. To address these limitations, we propose CORE, a novel method designed to achieve lossless context compression for RAG. CORE employs reinforcement learning to optimize the compression process without relying on predefined compression labels. Specifically, it utilizes end-task performance as a reward signal and applies Group Relative Policy Optimization (GRPO) to train the compressor. This end-to-end training framework enables the compressor to generate summaries that maximize the accuracy of answers generated by the LLM. Extensive experiments on four datasets demonstrate the superiority of our approach. With a high compression ratio of 3%, our method not only avoids performance degradation compared to prepending full documents across all datasets but also improves the average Exact Match (EM) score by 3.3 points. The code will be released soon.
Problem

Research questions and friction points this paper is trying to address.

Redundant retrieved documents in RAG inflate input length and computational cost
Existing compression methods rely on fixed heuristics and often degrade end-task performance
Well-defined compression targets and supervision labels are lacking, so heuristic compression cannot guarantee the compressed content supports the task
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning optimizes compression with end-task rewards, requiring no compression labels
GRPO trains the compressor to maximize the LLM's answer accuracy
At a 3% compression ratio, average EM improves by 3.3 points with no performance degradation
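GRPO, named in the innovations above, scores a group of sampled compressions per question and standardizes their rewards within the group to form advantages. A minimal sketch of that group-relative advantage computation, assuming binary end-task rewards (the function name and epsilon constant are our own, not from the paper):

```python
from statistics import mean, pstdev


def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """For one group of sampled compressions, standardize each reward
    against the group: A_i = (r_i - mean(r)) / (std(r) + eps).
    Compressions that beat the group average get positive advantage."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Because advantages are relative within each group, no separate value network is needed; the compressor is simply pushed toward the sampled compressions that yielded correct answers more often than its own alternatives.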