AI Summary
To address hallucination and unfaithful reasoning in large language models (LLMs) for knowledge graph question answering (KGQA), this paper proposes FiDeLiS, a unified framework that anchors stepwise reasoning chains to knowledge graphs (KGs) to enhance answer factuality and explainability. Methodologically, FiDeLiS introduces a stepwise beam search coupled with a deductive scoring mechanism, integrated with a Path-RAG module for efficient path pre-filtering; it enables verifiable, terminable, and traceable reasoning without model fine-tuning. By jointly modeling knowledge retrieval and logical inference, FiDeLiS significantly improves factual accuracy and reasoning transparency across multiple KGQA benchmarks. Moreover, it reduces computational overhead by over 40% compared to baseline approaches.
Abstract
Large language models (LLMs) are often challenged by generating erroneous or hallucinated responses, especially in complex reasoning tasks. Leveraging knowledge graphs (KGs) as external knowledge sources has emerged as a viable solution. However, existing KG-enhanced methods, whether retrieval-based or agent-based, encounter difficulties in accurately retrieving knowledge and efficiently traversing KGs at scale. In this paper, we propose a unified framework, FiDeLiS, designed to improve the factuality of LLM responses by anchoring answers to verifiable reasoning steps retrieved from a KG. To achieve this, we leverage stepwise beam search with a deductive scoring function, allowing the LLM to validate each reasoning step and halt the search once the question is deducible. In addition, our Path-RAG module pre-selects a smaller candidate set for each beam search step, reducing computational costs by narrowing the search space. Extensive experiments show that our training-free and efficient approach outperforms strong baselines, enhancing both factuality and interpretability.
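The core loop described above, where Path-RAG narrows the candidate edges at each step, a deductive score ranks beam extensions, and the search halts once the question is deducible, can be sketched in Python. Note this is a simplified illustration, not the paper's implementation: `relevance` stands in for Path-RAG's embedding-based retrieval, and `deductive_score` / `is_deducible` stand in for the LLM's deductive judgments; all function names and the toy KG are hypothetical.

```python
# Hypothetical sketch of a FiDeLiS-style stepwise beam search over a KG.
# The KG is a dict: node -> list of (relation, neighbor) edges.

def relevance(edge, question):
    """Toy lexical-overlap score between a relation and the question.
    (The paper uses embedding-based retrieval; this is a placeholder.)"""
    relation, _ = edge
    return len(set(relation.lower().split("_")) & set(question.lower().split()))

def path_rag_filter(kg, node, question, top_n):
    """Pre-select the top-n candidate edges from a node, narrowing the
    search space before the more expensive deductive scoring."""
    candidates = kg.get(node, [])
    return sorted(candidates, key=lambda e: relevance(e, question), reverse=True)[:top_n]

def deductive_score(path, question):
    """Stand-in for the LLM's judgment of how well the partial reasoning
    path supports deducing the answer."""
    return sum(relevance(step, question) for step in path)

def is_deducible(path, question):
    """Stand-in termination check: in FiDeLiS the LLM decides whether
    the question is already answerable from the current path."""
    return len(path) >= 2

def beam_search(kg, start, question, beam_width=2, top_n=3, max_steps=3):
    beams = [[("start", start)]]  # each beam is a list of (relation, node) steps
    for _ in range(max_steps):
        expansions = []
        for path in beams:
            _, tail = path[-1]
            for rel, nxt in path_rag_filter(kg, tail, question, top_n):
                expansions.append(path + [(rel, nxt)])
        if not expansions:
            break
        # Keep the top-scoring extensions, then stop early if any is deducible.
        expansions.sort(key=lambda p: deductive_score(p, question), reverse=True)
        beams = expansions[:beam_width]
        done = [p for p in beams if is_deducible(p, question)]
        if done:
            return done[0]
    return beams[0]
```

On a toy graph such as `{"film_X": [("directed_by", "nolan")]}` with the question "who directed the film", the search returns the path `[("start", "film_X"), ("directed_by", "nolan")]` after one step. The pre-filtering step is what keeps per-step cost bounded: the expensive deductive scoring is only applied to the small candidate set that survives `path_rag_filter`.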