Knowledge Reasoning Language Model: Unifying Knowledge and Language for Inductive Knowledge Graph Reasoning

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Inductive knowledge graph reasoning (KGR) faces fundamental challenges in modeling the uncertainty arising from unseen entities and relations. Existing large language model (LLM)-based approaches suffer from two key limitations: (1) distortion of LLMs' intrinsic knowledge by sparse graph-structured context, and (2) persistent generation hallucinations that are difficult to suppress. To address these, we propose KRLM, a novel framework featuring a purpose-built Knowledge Reasoning Language (KRL) instruction format and a dedicated tokenizer. KRLM introduces a dynamic knowledge memory mechanism that synergistically integrates LLM priors with graph topology, and incorporates a structure-aware next-entity predictor to explicitly constrain the generation process. Evaluated across 25 real-world inductive KGR benchmarks, KRLM achieves significant improvements over state-of-the-art methods, demonstrating superior generalization and reliability in both zero-shot and fine-tuned settings.

📝 Abstract
Inductive Knowledge Graph Reasoning (KGR) aims to discover facts in open-domain KGs containing unknown entities and relations, which poses a challenge for KGR models in comprehending uncertain KG components. Existing studies have proposed Knowledge Graph Foundation Models (KGFMs) that learn structural invariances across KGs to handle this uncertainty. Recently, Large Language Models (LLMs) have demonstrated strong capabilities for open-domain knowledge reasoning. As a result, the latest research has focused on LLM-based KGFMs that integrate LLM knowledge with KG context for inductive KGR. However, the intrinsic knowledge of LLMs may be overshadowed by sparse KG context, leading to LLM knowledge distortion, which can cause irreversible damage to model reasoning. Moreover, existing LLM-based KGR methods still struggle to fully constrain generative hallucinations in LLMs, severely limiting the credibility of reasoning results. To address these limitations, we propose a Knowledge Reasoning Language Model (KRLM) that achieves unified coordination between LLM knowledge and KG context throughout the KGR process. Specifically, we design a Knowledge Reasoning Language (KRL) instruction format and a KRL tokenizer to align LLM knowledge with KG representations. Then, we propose a KRL attention layer that coordinates intrinsic LLM knowledge with additional KG context through a dynamic knowledge memory mechanism. Finally, a structure-aware next-entity predictor is proposed, which strictly constrains the reasoning results within a trustworthy knowledge domain. Extensive experimental results on 25 real-world inductive KGR datasets demonstrate the significant superiority of the proposed KRLM in both zero-shot reasoning and fine-tuning scenarios. (Our source codes are available at https://anonymous.4open.science/r/KRLM-EA36.)
Problem

Research questions and friction points this paper is trying to address.

Addresses inductive knowledge graph reasoning with unknown entities and relations
Prevents LLM knowledge distortion caused by sparse KG context
Constrains generative hallucinations to improve reasoning credibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns LLM knowledge with KG representations via instruction format
Coordinates LLM knowledge and KG context through dynamic memory
Constrains reasoning results within trustworthy knowledge domain
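The paper does not detail its predictor here, but the idea of restricting generation to a trustworthy knowledge domain can be illustrated with generic score masking: candidate entities observed in the KG context keep their scores, while all others are excluded, so the model can never emit an out-of-graph entity. The function and variable names below are hypothetical, a minimal sketch rather than the authors' implementation:

```python
import math

def constrained_entity_scores(scores, candidate_ids):
    """Mask entity scores so only candidates from the current KG context survive.

    scores: list of raw scores over the full entity vocabulary.
    candidate_ids: indices of entities present in the (sub)graph context.
    """
    allowed = set(candidate_ids)
    return [s if i in allowed else -math.inf for i, s in enumerate(scores)]

# Toy example: 5 entities, but only entities 1 and 3 occur in the KG context.
scores = [2.0, 0.5, 3.0, 1.0, -1.0]
constrained = constrained_entity_scores(scores, [1, 3])
pred = max(range(len(constrained)), key=constrained.__getitem__)
# pred is guaranteed to be 1 or 3; entity 2's higher raw score is masked out.
```

Even though entity 2 has the highest unconstrained score, masking forces the prediction into the trusted candidate set, which is the essence of suppressing out-of-domain hallucinations at decoding time.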
Xingrui Zhuo
The Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), School of Computer Science and Information Engineering, Hefei University of Technology, China
Jiapu Wang
Nanjing University of Science and Technology, China
Gongqing Wu
Hefei University of Technology
Web Intelligence, Data Mining
Zhongyuan Wang
Hefei CSG Smart Robot Technology Co., Ltd., Hefei, China; CSG Smart Science & Technology Co., Ltd., Shanghai, China
Jichen Zhang
Shandong Inspur Science Research Institute, Jinan, China
Shirui Pan
Professor, ARC Future Fellow, FQA, Director of TrustAGI Lab, Griffith University
Data Mining, Machine Learning, Graph Neural Networks, Trustworthy AI, Time Series
Xindong Wu
The Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), School of Computer Science and Information Engineering, Hefei University of Technology, China