🤖 AI Summary
To address the lack of interpretability in temporal knowledge graph (TKG) forecasting, this paper proposes a fully explainable temporal rule learning framework. Methodologically, it introduces and mines four simple types of temporal rules (e.g., "if relation \(r_1\) holds at time \(t\), then \(r_2\) is likely to hold at \(t+1\)") and designs a dual-factor confidence scoring mechanism that jointly models temporal recency and support frequency. Rules are discovered and weighted efficiently by analyzing recurrent facts. Experiments on nine standard TKG benchmarks show that the method matches or surpasses eight state-of-the-art models and two baselines in predictive accuracy, while delivering human-understandable, logically traceable prediction justifications, combining strong performance with full interpretability.
📝 Abstract
We address the task of temporal knowledge graph (TKG) forecasting by introducing a fully explainable method based on temporal rules. Motivated by recent work proposing a strong baseline using recurrent facts, our approach learns four simple types of rules with a confidence function that considers both recency and frequency. Evaluated on nine datasets, our method matches or surpasses the performance of eight state-of-the-art models and two baselines, while providing fully interpretable predictions.
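To make the dual-factor confidence idea concrete, here is a minimal sketch of how a rule score combining recency and frequency might look. The paper does not give its exact formula, so the function name, the exponential decay, the smoothing, and the mixing weight `alpha` are all illustrative assumptions, not the authors' implementation.

```python
import math

def rule_confidence(match_timestamps, t_query, lam=0.1, alpha=0.5):
    """Hypothetical dual-factor confidence for a temporal rule.

    match_timestamps: timestamps at which the rule's body matched past facts.
    t_query: the timestamp we are forecasting for.
    lam: decay rate for the recency factor (assumed exponential decay).
    alpha: mixing weight between recency and frequency (assumed).
    """
    if not match_timestamps:
        return 0.0
    # Recency: matches closer to the query time count more.
    recency = math.exp(-lam * (t_query - max(match_timestamps)))
    # Frequency: smoothed support, so more matches give higher (bounded) weight.
    frequency = len(match_timestamps) / (len(match_timestamps) + 1)
    return alpha * recency + (1 - alpha) * frequency
```

Under these assumptions, a rule that matched recently and often scores higher than one with a single stale match, which is the qualitative behavior the abstract attributes to its confidence function.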