🤖 AI Summary
Static retrieval in GraphRAG often overlooks bridging documents that connect otherwise unrelated entities, leading to multi-hop reasoning failures and hallucination. This work presents the first systematic investigation into the mechanistic role of iterative retrieval in GraphRAG, revealing that its success hinges on the synergy between graph-structure awareness and reasoning-chain guidance. The authors propose Bridge-Guided Dual-Thought-based Retrieval (BDTR), a novel framework that explicitly models evidence associations via dual-path reasoning (fact-oriented and bridge-oriented) and incorporates reasoning-chain-guided re-ranking alongside dynamic evidence scoring to improve bridging-document recall. Experiments across diverse GraphRAG configurations demonstrate that BDTR significantly enhances accuracy on complex multi-hop questions, consistently mitigates evidence omission and hallucination, and establishes an interpretable, reusable paradigm for iterative retrieval in graph-augmented generation.
📝 Abstract
Retrieval-augmented generation (RAG) is a powerful paradigm for improving large language models (LLMs) on knowledge-intensive question answering. Graph-based RAG (GraphRAG) leverages entity-relation graphs to support multi-hop reasoning, but most systems still rely on static retrieval. When crucial evidence, especially bridge documents that connect disjoint entities, is absent, reasoning collapses and hallucinations persist. Iterative retrieval, which performs multiple rounds of evidence selection, has emerged as a promising alternative, yet its role within GraphRAG remains poorly understood. We present the first systematic study of iterative retrieval in GraphRAG, analyzing how different strategies interact with graph-based backbones and under what conditions they succeed or fail. Our findings reveal clear opportunities: iteration improves performance on complex multi-hop questions, helps promote bridge documents into leading ranks, and different strategies offer complementary strengths. At the same time, pitfalls remain: naive expansion often introduces noise that reduces precision, gains are limited on single-hop or simple comparison questions, and some bridge evidence may still be buried too deep to be used effectively. Together, these results highlight a central bottleneck: GraphRAG's effectiveness depends not only on recall but also on whether bridge evidence is consistently promoted into leading positions where it can support reasoning chains. To address this challenge, we propose Bridge-Guided Dual-Thought-based Retrieval (BDTR), a simple yet effective framework that generates complementary thoughts and leverages reasoning chains to recalibrate rankings and bring bridge evidence into leading positions. BDTR achieves consistent improvements across diverse GraphRAG settings and provides guidance for the design of future GraphRAG systems.
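The dual-thought re-ranking idea can be sketched minimally: each candidate document is scored against both a fact-oriented and a bridge-oriented thought, and the combined score is used to recalibrate the ranking so bridge evidence can rise into leading positions. The toy lexical-overlap similarity, the weighting scheme, and all names below are illustrative assumptions, not the paper's actual implementation (which would use a retriever's embedding similarity and reasoning-chain guidance).

```python
import re

def similarity(text: str, doc: str) -> float:
    """Toy Jaccard word-overlap similarity; a stand-in for a real
    embedding-based relevance score (illustrative assumption)."""
    tokens = lambda s: set(re.findall(r"\w+", s.lower()))
    a, b = tokens(text), tokens(doc)
    return len(a & b) / max(len(a | b), 1)

def dual_thought_rerank(docs, fact_thought, bridge_thought, alpha=0.5):
    """Score each retrieved document against both a fact-oriented and a
    bridge-oriented thought, then sort by the combined score so that
    bridge documents are promoted into leading positions."""
    def score(doc):
        return (alpha * similarity(fact_thought, doc)
                + (1 - alpha) * similarity(bridge_thought, doc))
    return sorted(docs, key=score, reverse=True)

docs = [
    "Paris is the capital of France",
    "The author of the novel was born in Lyon France",
    "Lyon hosts an annual light festival",
]
ranking = dual_thought_rerank(
    docs,
    fact_thought="capital of France",
    bridge_thought="author born in Lyon",
)
# The bridging document (connecting the author to Lyon) is promoted
# to the top, even though it matches the fact-oriented thought weakly.
```

A purely fact-oriented score would rank the capital sentence first and leave the bridging document buried, which is exactly the failure mode the abstract describes.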