EPERM: An Evidence Path Enhanced Reasoning Model for Knowledge Graph Question and Answering

📅 2025-02-22
🤖 AI Summary
To address hallucination and knowledge incompleteness in large language models (LLMs) for knowledge graph question answering (KGQA), this paper proposes a three-stage evidence-path-enhanced reasoning framework: subgraph retrieval, evidence path selection, and weighted reasoning. It reformulates KGQA as a graphical model and introduces an evidence path importance scoring mechanism that explicitly models the differing contributions of node types, relational semantics, and path structures to inference. By combining LLM fine-tuning, structure-aware subgraph retrieval, path importance scoring, and weighted aggregation-based reasoning, the method improves interpretability and robustness. Experimental results demonstrate state-of-the-art performance across multiple KGQA benchmarks, mitigating hallucination while enhancing knowledge coverage and factual grounding.

📝 Abstract
Owing to their remarkable reasoning ability, large language models (LLMs) have demonstrated impressive performance in knowledge graph question answering (KGQA) tasks, which find answers to natural language questions over knowledge graphs (KGs). To alleviate the hallucination and knowledge-gap issues of LLMs, existing methods often retrieve question-related information from KGs to enrich the input context. However, most methods focus on retrieving relevant information while ignoring the varying importance of different types of knowledge in reasoning, which degrades their performance. To this end, this paper reformulates the KGQA problem as a graphical model and proposes a three-stage framework named the Evidence Path Enhanced Reasoning Model (EPERM). In the first stage, EPERM uses a fine-tuned LLM to retrieve a question-related subgraph from the original knowledge graph. In the second stage, EPERM filters out the evidence paths that faithfully support reasoning about the question and scores their importance. Finally, EPERM uses the weighted evidence paths to infer the final answer. By accounting for the importance of different structural information in KGs, EPERM improves the reasoning ability of LLMs on KGQA tasks. Extensive experiments on benchmark datasets demonstrate that EPERM achieves superior performance in KGQA tasks.
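The three stages described above — subgraph retrieval, evidence path selection with importance scoring, and weighted reasoning — can be illustrated with a minimal toy sketch. This is not the paper's implementation: EPERM uses a fine-tuned LLM for retrieval and learns path importance scores, whereas the stand-in functions below (`retrieve_subgraph`, `score_path`, `weighted_answer`) are hypothetical simplifications that use a 1-hop entity filter and a relation-overlap scorer purely to show the shape of the pipeline.

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "located_in", "France"),
    ("Berlin", "capital_of", "Germany"),
]

def retrieve_subgraph(kg, question_entities):
    """Stage 1 (stand-in): keep triples touching the question entities.
    EPERM uses a fine-tuned LLM here; this is a simple 1-hop filter."""
    return [t for t in kg if t[0] in question_entities or t[2] in question_entities]

def enumerate_paths(subgraph, start, max_len=2):
    """Stage 2a: enumerate candidate evidence paths from a start entity."""
    adj = defaultdict(list)
    for h, r, t in subgraph:
        adj[h].append((r, t))
    paths = []
    def dfs(node, path):
        if path:
            paths.append(list(path))
        if len(path) >= max_len:
            return
        for r, t in adj[node]:
            path.append((node, r, t))
            dfs(t, path)
            path.pop()
    dfs(start, [])
    return paths

def score_path(path, question_relations):
    """Stage 2b (stand-in scorer): paths whose relations match the question
    score higher. EPERM learns these importance scores; this is a toy proxy."""
    hits = sum(1 for _, r, _ in path if r in question_relations)
    return hits / len(path)

def weighted_answer(paths, question_relations):
    """Stage 3: aggregate path endpoints, weighted by path importance."""
    votes = defaultdict(float)
    for p in paths:
        votes[p[-1][2]] += score_path(p, question_relations)
    return max(votes, key=votes.get) if votes else None

# "What country is Paris the capital of?"
sub = retrieve_subgraph(KG, {"Paris"})
paths = enumerate_paths(sub, "Paris")
print(weighted_answer(paths, {"capital_of"}))  # → France
```

The key design point the sketch preserves is the final stage: answers are not taken from any single retrieved path but from a weighted vote over all evidence paths, so paths whose structure matters more to the question contribute more to the answer.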
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning in KGQA tasks
Mitigating LLM hallucinations and knowledge gaps
Accounting for the varying importance of evidence paths in reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLM subgraph retrieval
Evidence path filtering and scoring
Weighted evidence path reasoning
Xiao Long
University of Science and Technology of China | Alibaba Group
Knowledge Graph · Large Language Model · Reasoning
Liansheng Zhuang
University of Science and Technology of China
Computer Vision · Knowledge Graph · Computer Games
Aodi Li
University of Science and Technology of China, Hefei 230026, China
Minghong Yao
University of Science and Technology of China, Hefei 230026, China
Shafei Wang
Peng Cheng Laboratory, Shenzhen 518000, China