Rule2Text: Natural Language Explanation of Logical Rules in Knowledge Graphs

📅 2025-07-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Logical rules in knowledge graphs often lack human readability, hindering their utility in factual reasoning, error detection, and pattern discovery. This paper presents the first systematic investigation into leveraging large language models (LLMs) to generate natural-language explanations for such logical rules. We propose a prompt engineering framework integrating zero-shot/few-shot prompting, type-aware enhancement, and chain-of-thought reasoning, evaluated on rules extracted by AMIE 3.5.1. Human evaluation shows that the generated explanations achieve promising correctness and clarity, and we additionally validate LLMs' capability as automated evaluators of rule explanations. Our work advances rule transparency and enables human-interpretable, interactive knowledge graph reasoning. All code and datasets are publicly released.

📝 Abstract
Knowledge graphs (KGs) often contain sufficient information to support the inference of new facts. Identifying logical rules not only improves the completeness of a knowledge graph but also enables the detection of potential errors, reveals subtle data patterns, and enhances the overall capacity for reasoning and interpretation. However, the complexity of such rules, combined with the unique labeling conventions of each KG, can make them difficult for humans to understand. In this paper, we explore the potential of large language models to generate natural language explanations for logical rules. Specifically, we extract logical rules using the AMIE 3.5.1 rule discovery algorithm from the benchmark dataset FB15k-237 and two large-scale datasets, FB-CVT-REV and FB+CVT-REV. We examine various prompting strategies, including zero- and few-shot prompting, the inclusion of variable entity types, and chain-of-thought reasoning. We conduct a comprehensive human evaluation of the generated explanations based on correctness, clarity, and hallucination, and also assess the use of large language models as automatic judges. Our results demonstrate promising performance in terms of explanation correctness and clarity, although several challenges remain for future research. All scripts and data used in this study are publicly available at https://github.com/idirlab/KGRule2NL.
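The prompting strategies named in the abstract can be sketched as prompt-construction helpers. This is a minimal illustration, not the authors' actual templates: the example rule, function names, and type annotations are all assumptions, and the actual LLM call is omitted.

```python
# Illustrative sketch of three prompting strategies for explaining an
# AMIE-style Horn rule (body atoms => head atom). The rule and templates
# below are hypothetical, not taken from the paper.
RULE = ("?a /people/person/nationality ?b <= "
        "?a /people/person/place_of_birth ?c, "
        "?c /location/location/containedby ?b")

def zero_shot_prompt(rule: str) -> str:
    """Zero-shot: ask for an explanation with no demonstrations."""
    return ("Explain the following knowledge-graph rule in plain English:\n"
            f"{rule}")

def few_shot_prompt(rule: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend (rule, explanation) demonstrations."""
    shots = "\n\n".join(f"Rule: {r}\nExplanation: {e}" for r, e in examples)
    return f"{shots}\n\nRule: {rule}\nExplanation:"

def type_aware_prompt(rule: str, var_types: dict[str, str]) -> str:
    """Type-aware: add entity-type hints for each rule variable."""
    hints = ", ".join(f"{v} is a {t}" for v, t in var_types.items())
    return (f"Variable types: {hints}.\n"
            "Explain the following knowledge-graph rule in plain English:\n"
            f"{rule}")

prompt = type_aware_prompt(RULE, {"?a": "person", "?b": "country", "?c": "city"})
```

Chain-of-thought prompting would extend any of these templates with an instruction to reason step by step over the body atoms before stating the conclusion.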
Problem

Research questions and friction points this paper is trying to address.

Generating natural language explanations for complex logical rules in knowledge graphs
Improving human understanding of KG rules via large language models
Evaluating explanation correctness and clarity using human and automated methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses AMIE 3.5.1 for logical rule extraction
Explores LLMs for natural language explanations
Evaluates explanations via human and automatic judges
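The LLM-as-judge evaluation mentioned above can be sketched as a rubric prompt plus a score parser. The rubric wording, score format, and function names here are assumptions for illustration, not the paper's actual protocol.

```python
# Hypothetical sketch of automatic judging: the judge LLM is asked to
# score an explanation on the paper's three criteria, and its reply is
# parsed into numeric scores. The template and format are assumptions.
JUDGE_TEMPLATE = (
    "Rule: {rule}\n"
    "Explanation: {explanation}\n"
    "Rate the explanation on correctness, clarity, and hallucination, "
    "each on a 1-5 scale, replying as "
    "'correctness=X clarity=Y hallucination=Z'."
)

def parse_judge_reply(reply: str) -> dict[str, int]:
    """Extract 'key=value' scores for the three criteria from a reply."""
    scores = {}
    for token in reply.split():
        key, _, value = token.partition("=")
        if key in {"correctness", "clarity", "hallucination"} and value.isdigit():
            scores[key] = int(value)
    return scores

print(parse_judge_reply("correctness=5 clarity=4 hallucination=1"))
# -> {'correctness': 5, 'clarity': 4, 'hallucination': 1}
```

A structured reply format like this makes the judge's scores directly comparable with human annotations on the same criteria.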
Nasim Shirvani-Mahdavi
University of Texas at Arlington, 701 S Nedderman Dr, Arlington, TX, 76019, USA
Devin Wingfield
University of Texas at Arlington, 701 S Nedderman Dr, Arlington, TX, 76019, USA
Amin Ghasemi
University of Texas at Arlington, 701 S Nedderman Dr, Arlington, TX, 76019, USA
Chengkai Li
Professor of Computer Science and Engineering, The University of Texas at Arlington
Big Data & Data Science, Computational Journalism, Data-Driven Fact-Checking, Natural Language Processing