🤖 AI Summary
To address the lack of unified temporal validity modeling, the difficulty of representing infinite validity periods, and the insufficient exploitation of temporal information in temporal hyper-relational knowledge graphs (KGs), this paper proposes a general time representation compatible with four temporal validity types: since, until, period, and time-invariant. It is the first to enable joint differentiable modeling of both time points and time intervals. Building on a hyper-relational KG architecture, the authors design an end-to-end neural model that integrates temporal embeddings, interval encoding, and adaptive interval learning. Evaluated on multiple temporal link prediction tasks, the approach consistently outperforms state-of-the-art baselines, improving performance by up to 75.3%, while significantly enhancing reasoning over long-duration and infinitely valid facts. This work establishes a new paradigm for temporal semantic modeling in temporal knowledge graphs.
📝 Abstract
Knowledge graphs (KGs) have become an effective paradigm for managing real-world facts, which are not only complex but also dynamically evolve over time. The temporal validity of facts often serves as a strong clue in downstream link prediction tasks, which predict a missing element of a fact. Traditional link prediction techniques on temporal KGs either consider a sequence of temporal snapshots of KGs with an ad hoc time interval or expand a temporal fact over its validity period under a predefined time granularity; these approaches not only suffer from sensitivity to the choice of time interval/granularity, but also face computational challenges when handling facts with long (even infinite) validity. Although recent hyper-relational KGs represent the temporal validity of a fact as qualifiers describing the fact, this remains suboptimal because it ignores the infinite validity of some facts and encodes insufficient information about temporal validity from the qualifiers. Against this background, we propose VITA, a $\underline{V}$ersatile t$\underline{I}$me represen$\underline{TA}$tion learning method for temporal hyper-relational knowledge graphs. We first propose a versatile time representation that can flexibly accommodate all four types of temporal validity of facts (i.e., since, until, period, time-invariant), and then design VITA to effectively learn the time information in both its time-value and timespan aspects to boost link prediction performance. We conduct a thorough evaluation of VITA against a sizable collection of baselines on real-world KG datasets. Results show that VITA outperforms the best-performing baselines in various link prediction tasks (predicting missing entities, relations, time, and other numeric literals) by up to 75.3%. Ablation studies and a case study also support our key design choices.
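The abstract does not spell out VITA's actual encoding, but the four validity types it names (since, until, period, time-invariant) can all be viewed as intervals whose endpoints may be unbounded. A minimal sketch of that unifying view, with hypothetical function names and `±inf` standing in for open-ended validity:

```python
import math

# Hypothetical illustration, not VITA's actual representation: each fact's
# temporal validity is mapped to a single (start, end) interval, where an
# unbounded endpoint is modeled as +/- infinity. This uniformly covers all
# four validity types named in the abstract.
def validity_interval(kind, t1=None, t2=None):
    """Map a validity type to a (start, end) interval."""
    if kind == "since":           # valid from t1 onward
        return (t1, math.inf)
    if kind == "until":           # valid up to t2
        return (-math.inf, t2)
    if kind == "period":          # valid within [t1, t2]
        return (t1, t2)
    if kind == "time-invariant":  # always valid
        return (-math.inf, math.inf)
    raise ValueError(f"unknown validity type: {kind}")

def timespan(interval):
    """Length of a validity interval; infinite for open-ended facts."""
    start, end = interval
    return end - start
```

Under this view, both the time values (the endpoints) and the timespan (their difference) are available per fact, which is the kind of information the abstract says VITA learns; facts with infinite validity need no expansion over a time granularity.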