🤖 AI Summary
Existing differentiable inductive logic programming (ILP) approaches are largely restricted to chain-like rules, which limits their ability to model the branching and cyclic logical structures prevalent in knowledge graphs and constrains both expressiveness and interpretability. To address this, we propose GLIDR, which learns graph-structured first-order rules of arbitrary topology. We design a compact rule search space parameterized by a bound on the number of free variables, so that explicit rules can be extracted from the learned weights and the model can be optimized end to end. We further introduce a differentiable message-passing inference framework that continuously relaxes logical reasoning. On knowledge graph completion, GLIDR substantially outperforms state-of-the-art differentiable rule learners, performs on par with black-box embedding methods despite relying on graph structure alone, is highly robust to training-data noise, and yields learned rules with high predictive accuracy and clear semantic interpretability.
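To make the continuous relaxation concrete, below is a minimal, self-contained sketch of differentiable message passing over a graph-structured rule body. It is an illustration under simple assumptions, not GLIDR's actual implementation: the toy knowledge graph, `rule_edges`, `soft_adjacency`, and the product-based conjunction are all hypothetical. Each rule-body edge holds learnable attention over relations, and each free variable's soft belief over entity bindings is refined by multiplying incoming messages.

```python
import torch

# Hypothetical toy setup: n entities, R relation types.
# A[r][i][j] = 1 if relation r links entity i to entity j (dense for clarity;
# a real KG would use sparse operations).
n, R = 5, 3
A = torch.zeros(R, n, n)
A[0, 0, 1] = A[0, 1, 2] = 1.0
A[1, 2, 3] = A[1, 1, 4] = 1.0
A[2, 3, 4] = 1.0

# The rule body is a graph over free variables X0..X3; unlike a chain, it may
# branch and reconverge. Each edge carries learnable attention over relations.
rule_edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
logits = torch.nn.Parameter(torch.randn(len(rule_edges), R))

def soft_adjacency(edge_logits):
    """Continuous relaxation of 'pick one relation': a convex mixture."""
    w = torch.softmax(edge_logits, dim=-1)        # (R,)
    return torch.einsum("r,rij->ij", w, A)        # (n, n)

def infer(head_entity, num_vars=4, steps=3):
    """Return soft scores over entity bindings for the last rule variable."""
    b = torch.ones(num_vars, n)                   # b[v][e]: belief that Xv = e
    b[0] = torch.nn.functional.one_hot(torch.tensor(head_entity), n).float()
    for _ in range(steps):
        incoming = [torch.ones(n) for _ in range(num_vars)]
        for k, (u, v) in enumerate(rule_edges):
            msg = (b[u] @ soft_adjacency(logits[k])).clamp(max=1.0)
            incoming[v] = incoming[v] * msg       # product-t-norm conjunction
        b = torch.stack([b[v] * incoming[v] for v in range(num_vars)])
    return b[-1]

scores = infer(head_entity=0)     # differentiable w.r.t. `logits`
scores.sum().backward()           # so the rule structure trains end to end
```

Because every step is differentiable, the relation attentions can be trained by gradient descent on an ordinary link prediction loss, and sharpening them afterwards recovers a discrete rule.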
📝 Abstract
Differentiable inductive logic programming (ILP) techniques have proven effective at finding approximate rule-based solutions to link prediction and node classification problems on knowledge graphs; however, the common assumption of chain-like rule structure can hamper the performance and interpretability of existing approaches. We introduce GLIDR, a differentiable rule learning method that models the inference of logic rules with more expressive syntax than previous methods. GLIDR uses a differentiable message passing inference algorithm that generalizes previous chain-like rule learning methods to allow rules with structural features such as branches and cycles. GLIDR has a simple and expressive rule search space that is parameterized by a limit on the maximum number of free variables a rule may contain. Explicit logic rules can be extracted from the weights of a GLIDR model for use with symbolic solvers. We demonstrate that GLIDR significantly outperforms existing rule learning methods on knowledge graph completion tasks and even competes with embedding methods despite the inherent disadvantage of being a structure-only prediction method. We show that rules extracted from GLIDR retain significant predictive performance, and that GLIDR is highly robust to training data noise. Finally, we demonstrate that GLIDR can be chained with deep neural networks and optimized end-to-end for rule learning on arbitrary data modalities.
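The abstract notes that explicit logic rules can be extracted from a trained model's weights. Below is one plausible way such extraction could look: discretize each rule-body edge's relation attention by argmax and keep only confident atoms. The relation names, threshold, and output format are hypothetical illustrations, not the paper's exact procedure.

```python
import torch

# Hypothetical relation vocabulary and rule-body graph over variables X0..X3.
relations = ["parent_of", "spouse_of", "lives_in"]
rule_edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
logits = torch.randn(len(rule_edges), len(relations))  # stand-in for trained weights

def extract_rule(edge_logits, edges, threshold=0.4):
    """Discretize soft relation attentions into an explicit first-order rule."""
    atoms = []
    for k, (u, v) in enumerate(edges):
        w = torch.softmax(edge_logits[k], dim=-1)
        r = int(w.argmax())
        if float(w[r]) >= threshold:               # keep only confident atoms
            atoms.append(f"{relations[r]}(X{u}, X{v})")
    return " AND ".join(atoms) + " => head(X0, X3)"

print(extract_rule(logits, rule_edges))
# Example output (depends on the weights):
# parent_of(X0, X1) AND lives_in(X1, X3) => head(X0, X3)
```

A rule discretized this way can be handed to a symbolic solver and evaluated exactly, which is how extracted rules can be checked for predictive performance independently of the differentiable model.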