AI Summary
Existing GraphRAG approaches are constrained by static knowledge graphs, which often suffer from incomplete structures that disrupt reasoning paths and are further compromised by low signal-to-noise ratios in factual evidence. To address these limitations, this work proposes Relink, a novel framework that introduces a "reason-and-construct" paradigm to dynamically build query-oriented evidence graphs. Relink instantiates missing relations on-the-fly from raw text, synergistically integrating structured knowledge graphs with a latent relation pool, and employs a query-aware unified scoring mechanism to jointly select high-quality candidate facts. This approach adaptively repairs broken reasoning chains and proactively filters noise, substantially enhancing the faithfulness and precision of the resulting evidence graph. Evaluated on five open-domain question answering benchmarks, Relink achieves consistent improvements, averaging +5.4% in Exact Match and +5.2% in F1 score over state-of-the-art GraphRAG baselines.
Abstract
Graph-based Retrieval-Augmented Generation (GraphRAG) mitigates hallucinations in Large Language Models (LLMs) by grounding them in structured knowledge. However, current GraphRAG methods are constrained by a prevailing "build-then-reason" paradigm, which relies on a static, pre-constructed Knowledge Graph (KG). This paradigm faces two critical challenges. First, the KG's inherent incompleteness often breaks reasoning paths. Second, the graph's low signal-to-noise ratio introduces distractor facts, presenting query-relevant but misleading knowledge that disrupts the reasoning process. To address these challenges, we argue for a "reason-and-construct" paradigm and propose Relink, a framework that dynamically builds a query-specific evidence graph. To tackle incompleteness, Relink instantiates required facts from a latent relation pool derived from the original text corpus, repairing broken paths on the fly. To handle misleading or distractor facts, Relink employs a unified, query-aware evaluation strategy that jointly considers candidates from both the KG and latent relations, selecting those most useful for answering the query rather than relying on their pre-existence. This empowers Relink to actively discard distractor facts and construct the most faithful and precise evidence path for each query. Extensive experiments on five Open-Domain Question Answering benchmarks show that Relink achieves significant average improvements of 5.4% in EM and 5.2% in F1 over leading GraphRAG baselines, demonstrating the superiority of our proposed framework.
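The abstract's unified, query-aware selection step can be pictured as ranking candidate facts from both sources by query usefulness. The sketch below is purely illustrative: the data structures, field names, and the scalar relevance scores are assumptions, not the authors' implementation or scoring function.

```python
# Hypothetical sketch of Relink-style unified candidate selection:
# candidates from the pre-built KG and from the latent relation pool are
# ranked together by a query-conditioned score, so a fact's pre-existence
# in the KG confers no privilege. All names/scores here are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    triple: tuple      # (head, relation, tail)
    source: str        # "kg" (static graph) or "latent" (instantiated from text)
    relevance: float   # assumed query-conditioned relevance in [0, 1]

def select_evidence(candidates, top_k=3):
    """Jointly rank KG facts and latent relations; keep the top_k most
    query-useful triples, implicitly discarding low-scoring distractors."""
    ranked = sorted(candidates, key=lambda c: c.relevance, reverse=True)
    return [c.triple for c in ranked[:top_k]]

candidates = [
    Candidate(("Paris", "capital_of", "France"), "kg", 0.92),
    Candidate(("France", "currency", "Euro"), "latent", 0.88),  # repaired path
    Candidate(("Paris", "hosted", "Olympics"), "kg", 0.31),     # distractor
]
print(select_evidence(candidates, top_k=2))
```

Note how the latent-pool fact outranks a pre-existing KG fact here: selection depends only on usefulness to the query, which is the core of the reason-and-construct paradigm described above.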