RFKG-CoT: Relation-Driven Adaptive Hop-count Selection and Few-Shot Path Guidance for Knowledge-Aware QA

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address hallucination in knowledge graph question answering (KGQA) caused by the parametric knowledge limitations of large language models (LLMs), this paper proposes RFKG-CoT, a method integrating knowledge graph (KG) path-enhanced reasoning. The approach rests on two core innovations: (1) a relation-mask-driven adaptive hop-count selector that overcomes the fixed-hop constraint of conventional question-driven path retrieval; and (2) a few-shot, path-guided chain-of-thought paradigm in "question-paths-answer" format that explicitly models path dependency during reasoning. RFKG-CoT jointly incorporates relational modeling, adaptive hop control, few-shot in-context learning, and KG path injection. Evaluated on four KGQA benchmarks, including WebQSP, RFKG-CoT boosts the accuracy of Llama2-7B by up to 14.7 percentage points. Ablation studies confirm strong synergy between the two modules, validating their complementary roles in mitigating LLM hallucination through structured, interpretable reasoning.

📝 Abstract
Large language models (LLMs) often hallucinate in knowledge-intensive QA due to parametric knowledge limitations. While existing methods like KG-CoT improve reliability by integrating knowledge graph (KG) paths, they suffer from rigid hop-count selection (solely question-driven) and underutilization of reasoning paths (lack of guidance). To address this, we propose RFKG-CoT. First, it replaces the rigid hop-count selector with a relation-driven adaptive hop-count selector that dynamically adjusts reasoning steps by activating KG relations (e.g., 1-hop for direct "brother" relations, 2-hop for indirect "father-son" chains), formalized via a relation mask. Second, it introduces a few-shot in-context learning path guidance mechanism with chain-of-thought reasoning that constructs examples in a "question-paths-answer" format to strengthen LLMs' ability to understand reasoning paths. Experiments on four KGQA benchmarks show RFKG-CoT improves accuracy by up to 14.7 pp (Llama2-7B on WebQSP) over KG-CoT. Ablations confirm the hop-count selector and the path prompt are complementary, jointly transforming KG evidence into more faithful answers.
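The relation-mask idea in the abstract can be sketched in a few lines. This is a hypothetical, rule-based illustration only: the relation vocabulary, the `relation_mask` helper, and the hop-counting rule are invented for this sketch, whereas the paper's actual selector is presumably learned.

```python
# Hypothetical sketch of a relation-mask-driven hop-count selector
# (illustrative only; not the paper's actual implementation).

# Toy relation vocabulary: the minimum hops each relation contributes
# when "activated" by the question (assumption for illustration).
RELATION_HOPS = {
    "brother": 1,  # direct sibling relation: answerable in 1 hop
    "father": 1,
    "son": 1,
}

def relation_mask(question: str, vocab) -> list:
    """Binary mask over the relation vocabulary: 1 if the relation
    is mentioned (activated) by the question, else 0."""
    q = question.lower()
    return [1 if rel in q else 0 for rel in vocab]

def select_hops(question: str) -> int:
    """Adaptive hop count: the activated relations approximate the
    length of the relation chain needed to reach the answer."""
    vocab = list(RELATION_HOPS)
    mask = relation_mask(question, vocab)
    active = [rel for rel, m in zip(vocab, mask) if m]
    # Chaining two activated relations (e.g. father -> son) needs
    # 2 hops; a single direct relation needs 1. Default to 1 hop.
    return max(1, sum(RELATION_HOPS[rel] for rel in active))

print(select_hops("Who is Tom's brother?"))            # direct: 1 hop
print(select_hops("Who is the son of Tom's father?"))  # chained: 2 hops
```

The point of the mask is that hop depth is decided by which relations the question activates, not by a fixed, question-independent setting.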
Problem

Research questions and friction points this paper is trying to address.

Adapts hop-count selection for KG reasoning dynamically
Guides LLMs with few-shot examples to understand paths
Reduces hallucinations in knowledge-intensive QA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive hop-count selector using relation masks
Few-shot in-context learning with path guidance
Combining relation-driven hops and path prompts
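The few-shot path-guidance idea can be made concrete with a small prompt builder. This is a minimal sketch under stated assumptions: the example question, path, and answer are invented, and the exact prompt wording the paper uses is not known, only the "question-paths-answer" layout described above.

```python
# Hypothetical sketch of the few-shot "question-paths-answer" prompt
# format (example content is invented for illustration).

FEW_SHOT_EXAMPLES = [
    {
        "question": "Who is the brother of Justin Bieber?",
        "paths": ["Justin Bieber -[sibling]-> Jaxon Bieber"],
        "answer": "Jaxon Bieber",
    },
]

def build_prompt(question: str, paths: list) -> str:
    """Assemble a few-shot prompt where each example shows the question,
    its retrieved KG reasoning paths, and the answer, so the LLM learns
    to ground its answer in the supplied paths."""
    blocks = []
    for ex in FEW_SHOT_EXAMPLES:
        blocks.append(
            f"Question: {ex['question']}\n"
            f"Paths: {'; '.join(ex['paths'])}\n"
            f"Answer: {ex['answer']}"
        )
    # The target question ends with an empty Answer slot for the LLM.
    blocks.append(
        f"Question: {question}\n"
        f"Paths: {'; '.join(paths)}\n"
        f"Answer:"
    )
    return "\n\n".join(blocks)

prompt = build_prompt(
    "Where was Barack Obama born?",
    ["Barack Obama -[place_of_birth]-> Honolulu"],
)
print(prompt)
```

Because every in-context example pairs its answer with the KG paths that justify it, the model is nudged to read the retrieved paths rather than answer from parametric memory alone.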
Chao Zhang
School of Computer Science and Technology, Soochow University, China
Minghan Li
School of Computer Science and Technology, Soochow University, China
Tianrui Lv
School of Computer Science and Technology, Soochow University, China
Guodong Zhou
Soochow University, China
Natural Language Processing · Artificial Intelligence