🤖 AI Summary
RAG systems suffer from a fundamental trade-off: inaccurate retrieval ranking leads to redundant context, high latency, and low answer accuracy. Abstractive compression reduces token count but increases latency, while extractive compression often ignores contextual dependencies due to non-adaptive sentence selection. This paper introduces the first context-aware extractive compression framework, which explicitly models sentence-level semantic dependencies and employs a lightweight, trainable classifier for query-adaptive, parallel sentence selection. The method dynamically balances retrieval quality and query complexity while preserving critical structural information. Evaluated on single-hop and multi-hop question answering benchmarks, the approach outperforms both uncompressed baselines and state-of-the-art compression methods in accuracy, reduces inference latency by 37%, and cuts input token count by 52%.
📝 Abstract
We introduce EXIT, an extractive context compression framework that enhances both the effectiveness and efficiency of retrieval-augmented generation (RAG) in question answering (QA). Current RAG systems often struggle when retrieval models fail to rank the most relevant documents, leading to the inclusion of more context at the expense of latency and accuracy. While abstractive compression methods can drastically reduce token counts, their token-by-token generation process significantly increases end-to-end latency. Conversely, existing extractive methods reduce latency but rely on independent, non-adaptive sentence selection, failing to fully utilize contextual information. EXIT addresses these limitations by classifying sentences from retrieved documents while preserving their contextual dependencies, enabling parallelizable, context-aware extraction that adapts to query complexity and retrieval quality. Our evaluations on both single-hop and multi-hop QA tasks show that EXIT consistently surpasses existing compression methods and even uncompressed baselines in QA accuracy, while also delivering substantial reductions in inference time and token count. By improving both effectiveness and efficiency, EXIT provides a promising direction for developing scalable, high-quality QA solutions in RAG pipelines. Our code is available at https://github.com/ThisIsHwang/EXIT
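To make the idea concrete, here is a minimal sketch of context-aware extractive compression as described above: each sentence of a retrieved document is scored against the query together with its neighboring sentences, and sentences above a threshold are kept in their original order. This is not the paper's actual implementation; the `toy_score` token-overlap function stands in for EXIT's trained classifier, and the naive sentence splitter, context window size, and threshold are illustrative assumptions.

```python
def split_sentences(text):
    # Naive splitter for illustration; a real system would use a proper segmenter.
    return [s.strip() for s in text.replace("?", ".").split(".") if s.strip()]

def toy_score(query, sentence, context):
    # Stand-in for EXIT's lightweight trained classifier: weighted token
    # overlap between the query and (a) the sentence itself, (b) its
    # surrounding context. Purely a hypothetical scoring heuristic.
    q = set(query.lower().split())
    s = set(sentence.lower().split())
    c = set(context.lower().split())
    return (len(q & s) + 0.5 * len(q & c)) / max(len(q), 1)

def compress(query, document, threshold=0.5):
    sents = split_sentences(document)
    kept = []
    for i, sent in enumerate(sents):
        # Context = neighboring sentences, so selection is context-aware;
        # each sentence is scored independently, so the loop parallelizes.
        context = " ".join(sents[max(0, i - 1): i + 2])
        if toy_score(query, sent, context) >= threshold:
            kept.append(sent)  # keep original order, preserving structure
    return ". ".join(kept)
```

For example, `compress("Who founded Acme", "Acme was founded by Jane Doe. The sky is blue. Jane later sold Acme.")` drops the irrelevant middle sentence but retains the last one, whose relevance is boosted by its context.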