Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address hallucination in large language models (LLMs) during knowledge graph (KG)-enhanced retrieval-augmented generation, which the authors attribute to insufficient exploitation of the prior knowledge embedded in KGs, this paper proposes the *Deliberation on Priors* (DP) framework for trustworthy reasoning. Methodologically, DP injects two kinds of priors: *structural priors* (KG topology) and *constraint priors* (explicit and implicit logical constraints). Structural priors are distilled into the LLM via a progressive strategy that combines supervised fine-tuning with Kahneman–Tversky optimization, improving the faithfulness of relation-path generation; constraint priors drive a reasoning–introspection step, built on constraint extraction and reflective path validation, that verifies reasoning before response generation. Evaluated on three benchmarks, DP sets a new state of the art, including a 13% Hit@1 improvement on ComplexWebQuestions, and improves relation-path faithfulness and response reliability while demonstrating strong generalization and practical applicability.

📝 Abstract
Knowledge graph-based retrieval-augmented generation seeks to mitigate hallucinations in Large Language Models (LLMs) caused by insufficient or outdated knowledge. However, existing methods often fail to fully exploit the prior knowledge embedded in knowledge graphs (KGs), particularly their structural information and explicit or implicit constraints. The former can enhance the faithfulness of LLMs' reasoning, while the latter can improve the reliability of response generation. Motivated by these, we propose a trustworthy reasoning framework, termed Deliberation over Priors (DP), which sufficiently utilizes the priors contained in KGs. Specifically, DP adopts a progressive knowledge distillation strategy that integrates structural priors into LLMs through a combination of supervised fine-tuning and Kahneman-Tversky optimization, thereby improving the faithfulness of relation path generation. Furthermore, our framework employs a reasoning-introspection strategy, which guides LLMs to perform refined reasoning verification based on extracted constraint priors, ensuring the reliability of response generation. Extensive experiments on three benchmark datasets demonstrate that DP achieves new state-of-the-art performance, especially a Hit@1 improvement of 13% on the ComplexWebQuestions dataset, and generates highly trustworthy responses. We also conduct various analyses to verify its flexibility and practicality. The code is available at https://github.com/reml-group/Deliberation-on-Priors.
Problem

Research questions and friction points this paper is trying to address.

Mitigate LLM hallucinations via knowledge graph priors
Enhance reasoning faithfulness with structural KG information
Improve response reliability through constraint-based verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive knowledge distillation with KG structural priors
Kahneman-Tversky optimization for faithful relation paths
Reasoning-introspection with constraint priors for reliability
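The two-stage control flow implied by these bullets — propose a relation path, execute it over the KG, then verify candidates against constraint priors — can be sketched minimally. This is a toy illustration of the described pipeline, not the authors' implementation: the KG, the planner, and the constraints are all hypothetical stand-ins (in the paper, the planner is an LLM trained with SFT and Kahneman–Tversky optimization, and introspection triggers refined re-reasoning rather than a simple fallback).

```python
# Toy KG of (head, relation, tail) triples -- hypothetical example data.
KG = {
    ("Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
}

def propose_relation_path(question):
    """Stand-in for the fine-tuned LLM planner: map a question to a relation path."""
    if "state" in question:
        return ["born_in", "located_in"]
    return ["born_in"]

def execute_path(entity, path):
    """Traverse the KG along the proposed relation path (structural prior)."""
    frontier = {entity}
    for rel in path:
        frontier = {t for (h, r, t) in KG if h in frontier and r == rel}
    return frontier

def introspect(answers, constraints):
    """Stand-in for constraint-prior verification: keep answers satisfying all constraints."""
    return {a for a in answers if all(c(a) for c in constraints)}

def dp_answer(question, topic_entity, constraints):
    path = propose_relation_path(question)        # faithfulness: path generation
    candidates = execute_path(topic_entity, path) # grounding in the KG
    verified = introspect(candidates, constraints)  # reliability: verification
    # Fall back to unverified candidates if verification empties the set
    # (the paper instead re-enters refined reasoning at this point).
    return verified or candidates

print(dp_answer("What state was Obama born in?", "Obama",
                constraints=[lambda a: a in {"Hawaii", "Alaska"}]))  # {'Hawaii'}
```

The separation matters: the path planner can hallucinate a plausible-looking relation sequence, but execution is constrained to edges that actually exist in the KG, and introspection filters what execution returns.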
👥 Authors
Jie Ma — MOE KLINNS Lab, Xi'an Jiaotong University
Ning Qu — NIO USA, Waymo, Baidu USA, Google, Carnegie Mellon University, Peking University
Zhitao Gao — MOE KLINNS Lab, Xi'an Jiaotong University
Rui Xing — University of Melbourne
Jun Liu — School of Computer Science and Technology, Xi'an Jiaotong University; Shaanxi Province Key Laboratory of Big Data Knowledge Engineering
Hongbin Pei — Xi'an Jiaotong University
Jiang Xie — School of Artificial Intelligence, Chongqing University of Posts and Telecommunications
Linyun Song — School of Computer Science, Northwestern Polytechnical University
Pinghui Wang — Xi'an Jiaotong University
Jing Tao — MOE KLINNS Lab, Xi'an Jiaotong University
Zhou Su — Xi'an Jiaotong University