🤖 AI Summary
In knowledge graph (KG)-based retrieval-augmented generation (RAG), weak retrievers suffer from spurious supervision signals and disordered evidence, owing to the absence of ground-truth annotations and the abstract structure of KGs. To address this, we propose ReG, a retrieval enhancement framework that (1) leverages large language model (LLM) feedback to denoise weak supervision signals, mitigating error accumulation under annotation scarcity, and (2) introduces a structure-aware reorganization module that restructures unordered retrieval results into logically coherent evidence chains. ReG supports multiple LLM backbones and requires no manual annotation, substantially reducing dependence on supervised data. Experiments demonstrate performance gains of up to 10% over baselines on mainstream benchmarks; ReG matches state-of-the-art methods with only 5% of the training data, reduces inference token consumption by up to 30%, and generalizes well across KGs.
📝 Abstract
Graph-based retrieval-augmented generation (RAG) enables large language models (LLMs) to ground their responses in structured external knowledge from up-to-date knowledge graphs (KGs), reducing hallucinations. However, graph-based RAG often relies on a weak retriever: I) lacking ground-truth annotations, the retriever is trained under weak supervision, which introduces spurious signals to the LLMs; II) owing to the abstract nature of graph data, the retrieved knowledge is often presented in unorganized forms. To mitigate these issues, we present Refined Graph-based RAG (ReG), which aligns weak retrievers with LLMs for graph-based RAG. Specifically, ReG incorporates LLM feedback to eliminate spurious signals and improve the quality of the supervision. Meanwhile, ReG introduces a structure-aware reorganization module that refactors retrieval results into logically coherent evidence chains. Experiments on prominent benchmarks demonstrate that ReG consistently and significantly improves performance across different LLM backbones, by up to 10%. The improved supervision quality enables ReG to match state-of-the-art performance with only 5% of the training data and to transfer to out-of-distribution KGs. Notably, when applied to reasoning-based LLMs, ReG reduces reasoning token cost by up to 30% and improves performance by up to 4%.
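As a rough illustration of the two ideas above, the sketch below shows (a) filtering weakly supervised KG triples with LLM feedback and (b) chaining unordered (head, relation, tail) triples into coherent evidence paths. The function names (`llm_feedback_filter`, `reorganize_into_chains`), the judge interface, and the head-to-tail chaining heuristic are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of ReG's two modules; names and heuristics are
# illustrative assumptions, not the paper's implementation.

def llm_feedback_filter(triples, llm_judge):
    """Denoise weak supervision: keep only triples the LLM judges relevant.

    `llm_judge` stands in for a call to an LLM that returns True/False
    for a candidate (head, relation, tail) triple.
    """
    return [t for t in triples if llm_judge(t)]

def reorganize_into_chains(triples):
    """Reorganize unordered triples into evidence chains by linking a
    triple's tail entity to the next triple's head entity."""
    by_head = {}
    for h, r, t in triples:
        by_head.setdefault(h, []).append((h, r, t))
    heads = {h for h, _, _ in triples}
    tails = {t for _, _, t in triples}
    chains = []
    # Chain roots: entities that never appear as a tail.
    for start in heads - tails:
        for first in by_head.get(start, []):
            chain = [first]
            # Greedily extend while the last tail starts another triple.
            while chain[-1][2] in by_head:
                nxt = by_head[chain[-1][2]][0]
                if nxt in chain:  # guard against cycles
                    break
                chain.append(nxt)
            chains.append(chain)
    return chains
```

For example, filtering `("Paris", "noise", "X")` out of a retrieved set and then chaining `("Paris", "capital_of", "France")` with `("France", "member_of", "EU")` yields one two-hop evidence chain from `Paris` to `EU`.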