Reliable Reasoning Path: Distilling Effective Guidance for LLM Reasoning with Knowledge Graphs

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the hallucination and poor performance of large language models (LLMs) on knowledge-intensive tasks—stemming from insufficient reliable reasoning grounds—this paper proposes a knowledge graph (KG)-enhanced method prioritizing reasoning path reliability over mere factual supplementation. We jointly model KG structure via relation embedding and bidirectional distribution learning, and introduce a semantic-guided, revisable path evaluation and refinement mechanism to extract question-adaptive, logically consistent reasoning paths. The approach is plug-and-play, compatible with diverse LLMs without architectural modification. Evaluated on two public benchmarks, it achieves state-of-the-art performance, significantly improving both answer accuracy and reasoning consistency for complex question answering.

📝 Abstract
Large language models (LLMs) often struggle with knowledge-intensive tasks due to a lack of background knowledge and a tendency to hallucinate. To address these limitations, integrating knowledge graphs (KGs) with LLMs has been intensively studied. Existing KG-enhanced LLMs focus on supplying factual knowledge, but still struggle with complex questions. We argue that refining the relationships among facts and organizing them into a logically consistent reasoning path is as important as the factual knowledge itself. Despite their potential, extracting reliable reasoning paths from KGs is challenging: graph structures are complex, and many candidate paths can be generated, making it difficult to distinguish useful paths from redundant ones. To tackle these challenges, we propose RRP, a framework that mines the knowledge graph by combining the semantic strengths of LLMs with structural information obtained through relation embedding and bidirectional distribution learning. Additionally, we introduce a rethinking module that evaluates and refines reasoning paths according to their significance. Experimental results on two public datasets show that RRP achieves state-of-the-art performance compared to existing baseline methods. Moreover, RRP can be easily integrated into various LLMs to enhance their reasoning abilities in a plug-and-play manner. By generating high-quality reasoning paths tailored to specific questions, RRP distills effective guidance for LLM reasoning.
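The abstract's rethinking module — evaluating candidate reasoning paths and keeping the most significant ones — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the paper uses a learned scorer built on relation embeddings and bidirectional distribution learning, whereas this stand-in scores a path by simple token overlap with the question.

```python
import re
from collections import Counter

def tokens(text):
    # Lowercase word counts; splits relation names like "directed_by" too.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def path_to_text(path):
    # Linearize a KG path of (head, relation, tail) triples into plain text.
    return " ".join(f"{h} {r} {t}" for h, r, t in path)

def significance(question, path):
    # Toy stand-in for the paper's learned significance score:
    # token overlap between the question and the linearized path.
    q, p = tokens(question), tokens(path_to_text(path))
    return sum(min(q[w], c) for w, c in p.items())

def rethink(question, candidate_paths, keep=1):
    # Evaluate candidates and retain only the most significant paths.
    return sorted(candidate_paths,
                  key=lambda p: significance(question, p),
                  reverse=True)[:keep]

question = "Who directed a film with Tom Hanks?"
paths = [
    [("Tom Hanks", "starred_in", "Forrest Gump"),
     ("Forrest Gump", "directed_by", "Robert Zemeckis")],
    [("Tom Hanks", "born_in", "Concord, California")],
]
best = rethink(question, paths)  # keeps the path that answers the question
```

Even this crude scorer prefers the two-hop path through `directed_by` over the irrelevant birthplace fact, which is the kind of useful-versus-redundant distinction the rethinking module is meant to make.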
Problem

Research questions and friction points this paper is trying to address.

LLMs lack background knowledge and hallucinate on knowledge-intensive tasks
Existing KG-enhanced LLMs struggle with complex question reasoning
Extracting reliable reasoning paths from KGs is hampered by complex graph structures and redundant candidate paths
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LLM semantics with KG structural embeddings
Introduces rethinking module for path significance evaluation
Plug-and-play integration for diverse LLM enhancement
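The plug-and-play claim amounts to prompt-level integration: distilled paths are supplied as guidance text alongside the question, so no LLM architecture changes are needed. A minimal sketch, assuming a hypothetical prompt template (the exact format the paper uses is not shown here):

```python
def format_paths(paths):
    # Render each reasoning path as an arrow-joined chain of triples.
    lines = []
    for path in paths:
        hops = " -> ".join(f"({h}) -[{r}]-> ({t})" for h, r, t in path)
        lines.append(f"- {hops}")
    return "\n".join(lines)

def build_prompt(question, paths):
    # Prepend distilled reasoning paths as guidance; any instruction-following
    # LLM can consume this string without modification.
    return (
        "Use the following reasoning paths from a knowledge graph as guidance.\n"
        f"{format_paths(paths)}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    "Who directed a film with Tom Hanks?",
    [[("Tom Hanks", "starred_in", "Forrest Gump"),
      ("Forrest Gump", "directed_by", "Robert Zemeckis")]],
)
```

Because the guidance lives entirely in the prompt, swapping the backbone LLM is a one-line change, which is what makes the integration plug-and-play.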
Yilin Xiao
The Hong Kong Polytechnic University, Hong Kong SAR, China
Chuang Zhou
The Hong Kong Polytechnic University
Large Language Models · Graph Learning · Recommendation System
Qinggang Zhang
The Hong Kong Polytechnic University
Knowledge Graphs · Large Language Models · Retrieval-Augmented Generation · Text-to-SQL
Bo Li
The Hong Kong Polytechnic University, Hong Kong SAR, China
Qing Li
The Hong Kong Polytechnic University, Hong Kong SAR, China
Xiao Huang
The Hong Kong Polytechnic University, Hong Kong SAR, China