GLIDR: Graph-Like Inductive Logic Programming with Differentiable Reasoning

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing differentiable Inductive Logic Programming (ILP) approaches are largely restricted to chain-like rules, limiting their ability to model complex logical structures—such as branching and cyclic patterns—prevalent in knowledge graphs, thereby constraining both expressiveness and interpretability. To address this, we propose graph-structured rules, enabling first-order logical rules of arbitrary topology. We design a compact rule search space grounded in the number of free variables, facilitating explicit rule derivation and end-to-end joint optimization. Furthermore, we introduce a differentiable message-passing inference framework that continuously relaxes logical reasoning. On knowledge graph completion, our method substantially outperforms state-of-the-art differentiable rule learners, achieves performance on par with black-box embedding methods, exhibits strong robustness to data noise, and yields learned rules with high predictive accuracy and clear semantic interpretability.

📝 Abstract
Differentiable inductive logic programming (ILP) techniques have proven effective at finding approximate rule-based solutions to link prediction and node classification problems on knowledge graphs; however, the common assumption of chain-like rule structure can hamper the performance and interpretability of existing approaches. We introduce GLIDR, a differentiable rule learning method that models the inference of logic rules with more expressive syntax than previous methods. GLIDR uses a differentiable message passing inference algorithm that generalizes previous chain-like rule learning methods to allow rules with features like branches and cycles. GLIDR has a simple and expressive rule search space which is parameterized by a limit on the maximum number of free variables that may be included in a rule. Explicit logic rules can be extracted from the weights of a GLIDR model for use with symbolic solvers. We demonstrate that GLIDR can significantly outperform existing rule learning methods on knowledge graph completion tasks and even compete with embedding methods despite the inherent disadvantage of being a structure-only prediction method. We show that rules extracted from GLIDR retain significant predictive performance, and that GLIDR is highly robust to training data noise. Finally, we demonstrate that GLIDR can be chained with deep neural networks and optimized end-to-end for rule learning on arbitrary data modalities.
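The abstract describes a message-passing inference algorithm that generalizes chain-like differentiable rule learning. GLIDR's actual algorithm is not given here, but the chain-like baseline it generalizes (in the style of Neural LP / TensorLog) can be sketched as a soft mixture over relation adjacency matrices composed by matrix products; all names, shapes, and weights below are illustrative assumptions, not GLIDR's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rule_len = 5, 3, 2

# One adjacency matrix per relation: A[r][i, j] = 1 if r(i, j) holds in the KG.
A = rng.integers(0, 2, size=(n_relations, n_entities, n_entities)).astype(float)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Learnable per-step attention over relations (random here for the sketch).
w = rng.normal(size=(rule_len, n_relations))

def chain_rule_scores(A, w):
    """Relaxed chain-rule inference: at each step, take a soft mixture of
    relation adjacency matrices and compose along the chain by matrix
    product. Entry (i, j) of the result scores how strongly the learned
    rule links entity i to entity j."""
    scores = np.eye(A.shape[1])
    for step in range(w.shape[0]):
        mixed = np.tensordot(softmax(w[step]), A, axes=1)  # soft relation choice
        scores = scores @ mixed                            # chain composition
    return scores

S = chain_rule_scores(A, w)
print(S.shape)  # (5, 5)
```

In this relaxation, a hard rule like `r1(X, Z) ∧ r2(Z, Y) → head(X, Y)` corresponds to the attention at each step concentrating on a single relation; GLIDR's contribution, per the abstract, is extending such inference beyond chains to rules with branches and cycles via message passing.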
Problem

Research questions and friction points this paper is trying to address.

Enhances rule-based solutions for knowledge graph tasks
Generalizes chain-like rules to support branches and cycles
Combines differentiable learning with symbolic logic rules
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable message passing inference algorithm
Expressive rule search space with free variables
End-to-end optimization with deep neural networks
Blair Johnson
Georgia Institute of Technology
Clayton Kerce
Georgia Institute of Technology
Faramarz Fekri
Georgia Institute of Technology
Information Theory · Wireless Communication · Neuro-Symbolic AI · Graphical Models · Reinforcement Learning