🤖 AI Summary
To address poor coordination between reasoning and external knowledge retrieval, as well as limited interpretability, in multi-hop question answering, this paper proposes a dynamic knowledge graph construction framework that integrates question decomposition with breadth-first search (BFS)-guided reasoning. The method retrieves and structurally organizes external knowledge on demand during inference, explicitly generating multi-hop evidence chains so that reasoning paths and their supporting knowledge evolve in step. Core techniques include retrieval-augmented generation, dynamic knowledge subgraph construction, stepwise question decomposition, and BFS-driven iterative reasoning. Evaluated on MuSiQue, 2WikiMultiHopQA, and HotpotQA, the approach achieves state-of-the-art performance: average exact match (EM) improves by 2.57% and F1 by 2.13%; on HotpotQA specifically, EM and F1 increase by 4.70% and 3.44%, respectively, demonstrating substantial gains in both accuracy and interpretability.
📝 Abstract
Recent progress in retrieval-augmented generation (RAG) has led to more accurate and interpretable multi-hop question answering (QA). Yet challenges persist in integrating iterative reasoning steps with external knowledge retrieval. To address this, we introduce StepChain GraphRAG, a framework that unites question decomposition with a Breadth-First Search (BFS) Reasoning Flow for enhanced multi-hop QA. Our approach first builds a global index over the corpus; at inference time, only retrieved passages are parsed on the fly into a knowledge graph, and the complex query is split into sub-questions. For each sub-question, a BFS-based traversal dynamically expands along relevant edges, assembling explicit evidence chains without overwhelming the language model with superfluous context. Experiments on MuSiQue, 2WikiMultiHopQA, and HotpotQA show that StepChain GraphRAG achieves state-of-the-art Exact Match (EM) and F1 scores, lifting average EM by 2.57% and F1 by 2.13% over the previous state-of-the-art method, with the largest gain on HotpotQA (+4.70% EM, +3.44% F1). StepChain GraphRAG also improves explainability by preserving the chain of thought across intermediate retrieval steps. We conclude by discussing how future work can mitigate the computational overhead and address potential hallucinations from large language models to refine efficiency and reliability in multi-hop QA.
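To make the BFS Reasoning Flow concrete, here is a minimal, self-contained sketch of the core idea: given a knowledge subgraph built from retrieved passages, each sub-question triggers a breadth-first traversal that returns an explicit chain of evidence triples. The graph contents, entity names, and function are illustrative assumptions, not the paper's actual data structures or implementation.

```python
from collections import deque

# Toy knowledge subgraph: entity -> list of (relation, neighbor) edges.
# The entities and relations below are illustrative placeholders.
GRAPH = {
    "Inception": [("directed_by", "Christopher Nolan")],
    "Christopher Nolan": [("born_in", "London"), ("directed", "Inception")],
    "London": [("capital_of", "United Kingdom")],
}

def bfs_evidence_chain(graph, start, goal):
    """Return the shortest chain of (head, relation, tail) triples
    linking `start` to `goal`, or None if no path exists."""
    queue = deque([(start, [])])  # (current entity, evidence so far)
    visited = {start}
    while queue:
        entity, chain = queue.popleft()
        if entity == goal:
            return chain
        for relation, neighbor in graph.get(entity, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, chain + [(entity, relation, neighbor)]))
    return None

# Two sub-questions of "In which country was the director of Inception born?"
hop1 = bfs_evidence_chain(GRAPH, "Inception", "Christopher Nolan")
hop2 = bfs_evidence_chain(GRAPH, "Christopher Nolan", "United Kingdom")
print(hop1 + hop2)
# The concatenated triples form the explicit multi-hop evidence chain
# that would be handed to the language model as grounded context.
```

Returning the chain of triples, rather than whole passages, mirrors the paper's stated goal of supplying explicit evidence without overwhelming the model with superfluous context; each hop's answer also seeds the start node of the next traversal.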