🤖 AI Summary
This work proposes WriteBack-RAG, a novel framework that treats the knowledge base in retrieval-augmented generation (RAG) systems as a trainable component, addressing a limitation of traditional RAG approaches: they rely on static corpora and struggle to integrate critical facts scattered amid noise. By distilling evidence from successful retrievals into compact knowledge units and writing them back into the original corpus, WriteBack-RAG dynamically improves retrieval quality. The framework is compatible with any RAG pipeline and large language model, and enables cross-method knowledge transfer. Extensive experiments across four RAG methods, six benchmark datasets, and two large language models demonstrate an average accuracy improvement of 2.14%, confirming that distilled knowledge written back into the corpus directly enriches it.
📝 Abstract
The knowledge base in a retrieval-augmented generation (RAG) system is typically assembled once and never revised, even though the facts a query requires are often fragmented across documents and buried in irrelevant content. We argue that the knowledge base should be treated as a trainable component and propose WriteBack-RAG, a framework that uses labeled examples to identify where retrieval succeeds, isolate the relevant documents, and distill them into compact knowledge units that are indexed alongside the original corpus. Because the method modifies only the corpus, it can be applied once as an offline preprocessing step and combined with any RAG pipeline. Across four RAG methods, six benchmarks, and two LLM backbones, WriteBack-RAG improves every evaluated setting, with gains averaging +2.14%. Cross-method transfer experiments further show that the distilled knowledge benefits RAG pipelines other than the one used to produce it, confirming that the improvement resides in the corpus itself.
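The write-back loop the abstract describes can be sketched as follows. This is a toy illustration, not the paper's implementation: the keyword-overlap retriever, the sentence-level distillation, and the success check via answer matching are all simplified stand-ins for the actual components.

```python
def retrieve(corpus, query, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def distill(docs, query):
    """Toy distillation: keep only sentences that share words with the query,
    dropping the surrounding noise."""
    q = set(query.lower().split())
    kept = [s.strip() for d in docs for s in d.split(".")
            if q & set(s.lower().split())]
    return ". ".join(kept)

def write_back(corpus, labeled_examples):
    """Offline pass: for labeled queries where retrieval succeeds (the gold
    answer appears in a retrieved document), distill the retrieved evidence
    into a compact knowledge unit and append it to the corpus."""
    new_units = []
    for query, gold in labeled_examples:
        docs = retrieve(corpus, query)
        if any(gold.lower() in d.lower() for d in docs):
            new_units.append(distill(docs, query))
    return corpus + new_units

# Facts are fragmented across documents and mixed with irrelevant content.
corpus = [
    "The Eiffel Tower is in Paris. Paris hosts many museums.",
    "The Eiffel Tower was completed in 1889. Steel prices rose that year.",
]
labeled = [("When was the Eiffel Tower completed", "1889")]

enriched = write_back(corpus, labeled)
# The enriched corpus now also contains a compact unit that pulls the
# scattered, query-relevant sentences together in one place.
```

Because the update touches only the corpus, it runs once as preprocessing, and any downstream RAG pipeline retrieves over the enriched index.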