EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation

๐Ÿ“… 2024-12-17
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
RAG systems suffer from a fundamental trade-off: inaccurate retrieval ranking leads to redundant context, high latency, and low answer accuracyโ€”abstract compression reduces token count but increases latency, while extractive compression often ignores contextual dependencies due to non-adaptive sentence selection. This paper introduces the first context-aware extractive compression framework, which explicitly models sentence-level semantic dependencies and employs a lightweight, trainable classifier for query-adaptive, parallel sentence selection. The method dynamically balances retrieval quality and query complexity while preserving critical structural information. Evaluated on single-hop and multi-hop question answering benchmarks, our approach outperforms both uncompressed baselines and state-of-the-art compression methods in accuracy, reduces inference latency by 37%, and cuts input token count by 52%.

Technology Category

Application Category

๐Ÿ“ Abstract
We introduce EXIT, an extractive context compression framework that enhances both the effectiveness and efficiency of retrieval-augmented generation (RAG) in question answering (QA). Current RAG systems often struggle when retrieval models fail to rank the most relevant documents, leading to the inclusion of more context at the expense of latency and accuracy. While abstractive compression methods can drastically reduce token counts, their token-by-token generation process significantly increases end-to-end latency. Conversely, existing extractive methods reduce latency but rely on independent, non-adaptive sentence selection, failing to fully utilize contextual information. EXIT addresses these limitations by classifying sentences from retrieved documents - while preserving their contextual dependencies - enabling parallelizable, context-aware extraction that adapts to query complexity and retrieval quality. Our evaluations on both single-hop and multi-hop QA tasks show that EXIT consistently surpasses existing compression methods and even uncompressed baselines in QA accuracy, while also delivering substantial reductions in inference time and token count. By improving both effectiveness and efficiency, EXIT provides a promising direction for developing scalable, high-quality QA solutions in RAG pipelines. Our code is available at https://github.com/ThisIsHwang/EXIT
Problem

Research questions and friction points this paper is trying to address.

Enhances retrieval-augmented generation efficiency and accuracy
Reduces latency and token count in QA systems
Improves context-aware extraction for adaptive query complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extractive compression preserving contextual dependencies
Parallelizable context-aware sentence classification
Adaptive extraction based on query complexity
๐Ÿ”Ž Similar Papers
No similar papers found.