🤖 AI Summary
Problem: Existing rule-based knowledge graph completion methods treat logical rules as globally applicable, assigning each rule a fixed confidence score that ignores query-specific context and thus cannot adapt a rule's importance dynamically. Method: This paper proposes SLogic (Subgraph-Informed Logical Rule learning), a framework that introduces query-dependent rule scoring: it constructs a local subgraph centered on the query's head entity, encodes that contextual information with a subgraph encoder and relation-aware attention, and employs a differentiable rule-matching module for end-to-end training. Contribution/Results: SLogic drops the conventional assumption of static rule confidence, enabling query-adaptive rule weighting. It achieves state-of-the-art performance on link prediction benchmarks, outperforming both embedding-based and rule-based baselines while retaining the interpretability of rule-based inference.
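The core idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the encoder, the attention, and all names (`encode_subgraph`, `score_rules`, the toy relation embeddings) are invented for illustration, and the "subgraph encoding" is reduced to averaging relation embeddings with a dot-product attention over rule bodies. It only shows how the same set of rules can receive different weights depending on the head entity's local subgraph context:

```python
# Hypothetical sketch of query-dependent rule scoring (all names invented).
# Static approaches: score(rule) is a fixed confidence.
# SLogic-style idea: score(rule | query) is conditioned on the head
# entity's local subgraph, so the same rule is weighted per query.
from math import exp

def softmax(xs):
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def encode_subgraph(relations, rel_embed):
    """Toy subgraph encoding: average the embeddings of the relations
    appearing in the head entity's local subgraph (or a rule body)."""
    vecs = [rel_embed[r] for r in relations]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def score_rules(rule_bodies, subgraph_relations, rel_embed):
    """Attention-style scoring: a rule whose body aligns with the
    query subgraph's context vector receives a higher weight."""
    ctx = encode_subgraph(subgraph_relations, rel_embed)
    scores = []
    for body in rule_bodies:
        body_vec = encode_subgraph(body, rel_embed)
        scores.append(sum(c * b for c, b in zip(ctx, body_vec)))
    return softmax(scores)
```

For example, with toy 2-d relation embeddings where `lives_in` and `born_in` point in the same direction, a query whose head-entity subgraph contains those relations will upweight a rule whose body uses `lives_in` over one using `works_at`; in a real system the encoder and attention would be learned end-to-end through the differentiable rule-matching module.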
📝 Abstract
Logical rule-based methods offer an interpretable approach to knowledge graph completion by capturing compositional relationships in the form of human-readable inference rules. However, current approaches typically treat logical rules as universal, assigning each rule a fixed confidence score that ignores query-specific context. This is a significant limitation, as a rule's importance can vary depending on the query. To address this, we introduce SLogic (Subgraph-Informed Logical Rule learning), a novel framework that assigns query-dependent scores to logical rules. The core of SLogic is a scoring function that utilizes the subgraph centered on a query's head entity, allowing the significance of each rule to be assessed dynamically. Extensive experiments on benchmark datasets show that by leveraging local subgraph context, SLogic consistently outperforms state-of-the-art baselines, including both embedding-based and rule-based methods.