Large Language Model-Enhanced Symbolic Reasoning for Knowledge Base Completion

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit strong semantic understanding but suffer from low reliability in knowledge base completion (KBC), whereas symbolic rule-based methods offer logical rigor yet lack generalizability. Method: This paper proposes a three-stage LLM-enhanced symbolic reasoning framework: (1) subgraph extraction, (2) LLM-driven rule generation, and (3) rule refinement under joint constraints of logical consistency and empirical support. Contribution/Results: We introduce the first verifiable rule discovery mechanism that simultaneously ensures rule diversity and mitigates LLM hallucination, yielding LLM-augmented symbolic reasoning that is explainable and verifiable end to end. On standard benchmarks—including FB15k-237 and WN18RR—it improves completion accuracy by up to 12.6% over both standalone LLMs and traditional rule-learning methods, delivering both state-of-the-art performance and strong interpretability.

📝 Abstract
Integrating large language models (LLMs) with rule-based reasoning offers a powerful solution for improving the flexibility and reliability of Knowledge Base Completion (KBC). Traditional rule-based KBC methods offer verifiable reasoning yet lack flexibility, while LLMs provide strong semantic understanding yet suffer from hallucinations. Aiming to combine LLMs' understanding capability with the logical rigor of rule-based approaches, we propose a novel framework consisting of a Subgraph Extractor, an LLM Proposer, and a Rule Reasoner. The Subgraph Extractor first samples subgraphs from the KB. The LLM then uses these subgraphs to propose diverse and meaningful rules that help infer missing facts. To effectively avoid hallucination in the LLM's generations, these proposed rules are further refined by a Rule Reasoner, which pinpoints the rules most significant for completing the KB. Our approach offers several key benefits: the use of LLMs enhances the richness and diversity of the proposed rules, and the integration with rule-based reasoning improves reliability. Our method also demonstrates strong performance across diverse KB datasets, highlighting the robustness and generalizability of the proposed framework.
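A minimal sketch of the three-stage pipeline the abstract describes, on a toy knowledge base. The subgraph sampler, the candidate rules, and the confidence-based scoring are illustrative assumptions (the LLM Proposer is replaced by a stub returning fixed rules), not the paper's actual implementation:

```python
# Toy KB: (head_entity, relation, tail_entity) triples.
KB = {
    ("alice", "born_in", "paris"),
    ("paris", "city_of", "france"),
    ("alice", "nationality", "france"),
    ("bob", "born_in", "lyon"),
    ("lyon", "city_of", "france"),
    ("bob", "nationality", "france"),
    ("carol", "born_in", "berlin"),
    ("berlin", "city_of", "germany"),
}

def extract_subgraph(kb, seed, hops=2):
    """Stage 1 (Subgraph Extractor): collect triples within `hops` of a seed entity."""
    frontier, triples = {seed}, set()
    for _ in range(hops):
        new = set()
        for (h, r, t) in kb:
            if h in frontier or t in frontier:
                triples.add((h, r, t))
                new.update((h, t))
        frontier = new
    return triples

def propose_rules(subgraph):
    """Stage 2 (LLM Proposer, stubbed): a real system would prompt an LLM
    with the serialized subgraph; here we return two fixed candidates of
    the form (body_relation_path, head_relation)."""
    return [(("born_in", "city_of"), "nationality"),   # plausible rule
            (("born_in", "city_of"), "born_in")]       # hallucinated rule

def score_rule(kb, rule):
    """Stage 3 (Rule Reasoner): confidence = fraction of body groundings
    (x -r1-> y -r2-> z) for which the head triple (x, head, z) holds."""
    (r1, r2), head = rule
    support = total = 0
    for (x, ra, y) in kb:
        if ra != r1:
            continue
        for (y2, rb, z) in kb:
            if rb == r2 and y2 == y:
                total += 1
                support += (x, head, z) in kb
    return support / total if total else 0.0

subgraph = extract_subgraph(KB, "alice")
ranked = sorted(propose_rules(subgraph), key=lambda r: score_rule(KB, r), reverse=True)
best = ranked[0]
print(best, score_rule(KB, best))
```

The empirical scoring step is what filters out the hallucinated second rule: it has zero support in the KB, while the first rule holds for two of its three groundings.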
Problem

Research questions and friction points this paper is trying to address.

Language Models
Rule-based Systems
Knowledge Base Completion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Rule-based Reasoning
Knowledge Base Completion