🤖 AI Summary
To address the high computational cost and noise interference caused by long contexts in retrieval-augmented generation (RAG), this paper proposes a dynamic context pruning method. Unlike conventional heuristic or ranking-based pruning approaches, we formulate context pruning as a sequence labeling task—enabling fine-grained, adaptive identification and retention of salient text segments. We further design a unified pruning-and-reranking architecture that jointly optimizes information filtering and importance scoring. To ensure robust generalization across domains and resilience to variations in context length and relevance density, we adopt a multi-domain mixed training strategy. Evaluated on diverse RAG benchmarks, our method achieves over 90% text compression with negligible performance degradation (average accuracy drop <0.3%), reduces inference latency by 65%, and significantly improves both end-to-end generation efficiency and answer accuracy.
📝 Abstract
Retrieval-augmented generation improves various aspects of large language model (LLM) generation, but suffers from computational overhead caused by long contexts, as well as from the propagation of irrelevant retrieved information into generated responses. Context pruning addresses both issues by removing irrelevant parts of retrieved contexts before LLM generation. Existing context pruning approaches are, however, limited, and do not provide a universal model that is both efficient and robust across a wide range of scenarios, e.g., when contexts contain a variable amount of relevant information, vary in length, or are drawn from different domains. In this work, we close this gap and introduce Provence (Pruning and Reranking Of retrieVEd relevaNt ContExts), an efficient and robust context pruner for Question Answering, which dynamically detects the needed amount of pruning for a given context and can be used out of the box across domains. The three key ingredients of Provence are: formulating context pruning as a sequence labeling task, unifying context pruning capabilities with context reranking, and training on diverse data. Our experimental results show that Provence enables context pruning with negligible to no drop in performance, across various domains and settings, at almost no cost in a standard RAG pipeline. We also conduct a deeper analysis with various ablations to provide insights for training context pruners in future work.
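The sequence-labeling formulation can be illustrated with a minimal sketch: a token-level classifier assigns each context token a relevance score with respect to the question, and spans whose tokens score above a threshold are kept. The scoring here is a stubbed placeholder (in the paper it comes from a trained encoder), and all names and the averaging rule below are illustrative assumptions, not the paper's actual implementation:

```python
def prune_context(sentences, token_scores, threshold=0.5):
    """Keep sentences whose mean token relevance exceeds `threshold`.

    sentences:    list of context sentences.
    token_scores: per-sentence lists of token relevance scores in [0, 1],
                  as a sequence-labeling model would output (stubbed here).
    """
    kept = []
    for sentence, scores in zip(sentences, token_scores):
        # Aggregate token-level labels to a sentence-level keep/drop decision.
        if sum(scores) / len(scores) > threshold:
            kept.append(sentence)
    return " ".join(kept)


# Toy scores standing in for a trained token classifier's output.
sentences = [
    "This sentence answers the question.",   # relevant
    "This sentence is off-topic filler.",    # irrelevant
]
token_scores = [
    [0.9, 0.8, 0.95, 0.7, 0.85],  # high relevance
    [0.1, 0.2, 0.05, 0.1, 0.15],  # low relevance
]
print(prune_context(sentences, token_scores))
# Only the relevant sentence survives pruning.
```

Because the threshold operates on model-predicted scores rather than a fixed compression budget, the amount of pruning adapts to how much relevant information each retrieved context actually contains, which is the dynamic behavior the paper emphasizes.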