FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering

📅 2024-05-22
🏛️ arXiv.org
📈 Citations: 5
✨ Influential: 0
🤖 AI Summary
To address hallucination and unfaithful reasoning in large language models (LLMs) for knowledge graph question answering (KGQA), this paper proposes FiDeLiS, a unified framework that anchors stepwise reasoning chains to knowledge graphs (KGs) to improve answer factuality and explainability. Methodologically, FiDeLiS introduces a stepwise beam search coupled with a deductive scoring mechanism, integrated with a Path-RAG module for efficient path pre-filtering; this enables verifiable, terminable, and traceable reasoning without model fine-tuning. By jointly modeling knowledge retrieval and logical inference, FiDeLiS significantly improves factual accuracy and reasoning transparency across multiple KGQA benchmarks, while reducing computational overhead by over 40% compared to baseline approaches.
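The stepwise beam search with early termination described above might be sketched as follows. This is a hypothetical illustration, not the paper's implementation: `get_candidates`, `score_step`, and `is_deducible` are stand-in callbacks for the Path-RAG pre-filter, the LLM-based deductive scoring function, and the deducibility check, respectively.

```python
def beam_search(question, start_entity, get_candidates, score_step,
                is_deducible, beam_width=3, max_depth=4):
    """Expand reasoning paths over a KG, keeping the top-k paths at each
    step and halting a path once the question is deducible from it."""
    beams = [[start_entity]]   # each beam is a path of KG nodes
    finished = []
    for _ in range(max_depth):
        # Extend every surviving path with pre-filtered candidates.
        expansions = []
        for path in beams:
            for nxt in get_candidates(path):
                expansions.append(path + [nxt])
        # Deductive scoring: rank extended paths by how well they
        # support answering the question, keep the top beam_width.
        expansions.sort(key=lambda p: score_step(question, p), reverse=True)
        beams = []
        for path in expansions[:beam_width]:
            if is_deducible(question, path):   # early termination
                finished.append(path)
            else:
                beams.append(path)
        if not beams:
            break
    return finished or beams
```

On a toy graph `{'q': ['a', 'b'], 'a': ['answer'], 'b': ['c']}`, with a scorer that prefers paths ending in `'answer'`, the search returns `[['q', 'a', 'answer']]` and stops expanding that path as soon as it is deducible.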

πŸ“ Abstract
Large language models (LLMs) are often challenged by generating erroneous or hallucinated responses, especially in complex reasoning tasks. Leveraging knowledge graphs (KGs) as external knowledge sources has emerged as a viable solution. However, existing KG-enhanced methods, either retrieval-based or agent-based, encounter difficulties in accurately retrieving knowledge and efficiently traversing KGs at scale. In this paper, we propose a unified framework, FiDeLiS, designed to improve the factuality of LLM responses by anchoring answers to verifiable reasoning steps retrieved from a KG. To achieve this, we leverage step-wise beam search with a deductive scoring function, allowing the LLM to validate each reasoning step and halt the search once the question is deducible. In addition, our Path-RAG module pre-selects a smaller candidate set for each beam search step, reducing computational costs by narrowing the search space. Extensive experiments show that our training-free and efficient approach outperforms strong baselines, enhancing both factuality and interpretability.
Problem

Research questions and friction points this paper is trying to address.

Improve factuality in LLM responses
Accurate retrieval from knowledge graphs
Efficient reasoning in complex tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Step-wise beam search
Deductive scoring function
Path-rag module pre-selection
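The Path-RAG pre-selection idea, narrowing the candidate set before each beam-search step, might be sketched as below. This is an assumption-laden toy: the bag-of-words embedding and cosine ranking stand in for whatever retriever the paper actually uses; only the pre-filter-then-rank pattern is drawn from the text.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a stand-in for a real retriever.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def path_rag_prefilter(question, candidate_relations, top_k=2):
    """Rank candidate KG relations by similarity to the question and
    keep only the top-k, shrinking the space the beam search explores."""
    q_vec = embed(question)
    ranked = sorted(
        candidate_relations,
        key=lambda r: cosine(q_vec, embed(r.replace('_', ' '))),
        reverse=True,
    )
    return ranked[:top_k]
```

For the question "where was the director born" and candidates `["directed_by", "born_in", "capital_of"]`, the filter surfaces `born_in` first, so the downstream LLM only has to score a short, relevant list rather than every outgoing edge.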