🤖 AI Summary
To address hallucination and knowledge incompleteness in large language models (LLMs) for knowledge graph question answering (KGQA), this paper proposes a three-stage evidence-path-augmentation framework: subgraph retrieval, critical evidence path selection, and weighted reasoning. It reformulates KGQA as a graphical model and introduces an evidence path importance scoring mechanism that explicitly models the heterogeneous contributions of node types, relational semantics, and path structures to inference. By integrating LLM fine-tuning, structure-aware subgraph retrieval, path importance scoring, and weighted aggregation-based reasoning, the method improves interpretability and robustness. Experiments across multiple KGQA benchmarks show state-of-the-art performance, mitigating hallucination while enhancing knowledge coverage and factual grounding.
📝 Abstract
Owing to their remarkable reasoning ability, large language models (LLMs) have demonstrated impressive performance in knowledge graph question answering (KGQA) tasks, which answer natural language questions over knowledge graphs (KGs). To alleviate the hallucination and knowledge-incompleteness issues of LLMs, existing methods often retrieve question-related information from KGs to enrich the input context. However, most methods focus on retrieving relevant information while ignoring the varying importance of different types of knowledge in reasoning, which degrades their performance. To this end, this paper reformulates the KGQA problem as a graphical model and proposes a three-stage framework named the Evidence Path Enhanced Reasoning Model (EPERM). In the first stage, EPERM uses a fine-tuned LLM to retrieve a subgraph related to the question from the original knowledge graph. In the second stage, EPERM selects the evidence paths that faithfully support reasoning about the question and scores their importance. Finally, EPERM uses the weighted evidence paths to infer the final answer. By accounting for the importance of different structural information in KGs during reasoning, EPERM improves the reasoning ability of LLMs on KGQA tasks. Extensive experiments on benchmark datasets demonstrate that EPERM achieves superior performance in KGQA tasks.
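The three-stage pipeline described above can be sketched in miniature. This is a minimal illustration, not the paper's actual method: the toy triples, the keyword-based subgraph retrieval stand-in (the paper uses a fine-tuned LLM), the hand-picked relation weights (the paper learns importance scores), and all function names are hypothetical.

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples -- illustrative data only.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "France"),
    ("Paris", "twinned_with", "Rome"),
    ("Rome", "capital_of", "Italy"),
]

def retrieve_subgraph(entity, triples):
    """Stage 1 (stand-in): keep triples anchored at the question entity.
    EPERM instead uses a fine-tuned LLM for structure-aware retrieval."""
    return [t for t in triples if t[0] == entity]

def score_path(path):
    """Stage 2 (stand-in): weight each one-hop evidence path by its relation.
    EPERM scores paths by their estimated importance to the reasoning."""
    weights = {"capital_of": 1.0, "located_in": 0.6, "twinned_with": 0.1}
    return weights.get(path[1], 0.0)

def answer(entity, triples):
    """Stage 3: aggregate candidate answers by summed evidence-path weights."""
    scores = defaultdict(float)
    for path in retrieve_subgraph(entity, triples):
        scores[path[2]] += score_path(path)
    return max(scores, key=scores.get)
```

Here `answer("Paris", TRIPLES)` returns `"France"`: two supporting paths (`capital_of`, `located_in`) outweigh the single weak `twinned_with` path to `"Rome"`, illustrating why weighting evidence paths, rather than treating all retrieved knowledge equally, changes the final prediction.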