🤖 AI Summary
This work addresses inductive knowledge graph completion—where test entities are unseen during training—while avoiding the high computational overhead and complex hyperparameter tuning associated with explicit path encoding. We propose a Transformer-based subgraph encoder that implicitly captures relational path semantics without enumerating paths, via a novel connection-biased self-attention mechanism and entity role embeddings. This design reduces both inference latency and the cost of hyperparameter optimization. Experiments on standard inductive KG completion benchmarks demonstrate that our approach outperforms existing path-agnostic models and matches or surpasses state-of-the-art path-enhanced methods. Moreover, we validate its generalization capability on the transductive relation prediction task. Overall, our method achieves strong performance with improved efficiency and reduced engineering complexity.
📝 Abstract
Knowledge graph (KG) completion aims to identify additional facts that can be inferred from the existing facts in a KG. Recent developments in this field have explored this task in the inductive setting, where entities seen at test time were not present during training; the most performant models in the inductive setting have employed path encoding modules in addition to standard subgraph encoding modules. This work likewise focuses on KG completion in the inductive setting, but without the explicit use of path encodings, which can be time-consuming and introduce several hyperparameters that require costly hyperparameter optimization. Our approach uses only a Transformer-based subgraph encoding module; we introduce connection-biased attention and entity role embeddings into the subgraph encoder to eliminate the need for an expensive and time-consuming path encoding module. Evaluations on standard inductive KG completion benchmark datasets demonstrate that our **C**onnection-**B**iased **Li**nk **P**rediction (CBLiP) model has superior performance to models that do not use path information. Compared to models that utilize path information, CBLiP shows competitive or superior performance while being faster. Additionally, to show that the effectiveness of connection-biased attention and entity role embeddings also holds in the transductive setting, we evaluate CBLiP on the relation prediction task in that setting.
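The core idea of biasing self-attention with graph connectivity can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual formulation: it shows single-head self-attention over subgraph node features where an additive bias term, indexed by how each node pair is connected, is added to the attention scores before the softmax. All names (`connection_biased_attention`, `conn_bias`) and the specific masking scheme are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def connection_biased_attention(H, Wq, Wk, Wv, conn_bias):
    """Single-head self-attention over subgraph node features H.

    H: (n, d) node feature matrix.
    conn_bias: (n, n) additive bias, one scalar per node pair,
    encoding how (or whether) the pair is connected in the subgraph.
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    d = Q.shape[-1]
    # structural bias is injected additively into the score matrix
    scores = Q @ K.T / np.sqrt(d) + conn_bias
    return softmax(scores) @ V

# toy example: 4 subgraph nodes with 8-dim features
rng = np.random.default_rng(0)
n, d = 4, 8
H = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# illustrative bias: 0 for directly connected pairs,
# a large negative value (effectively masked) otherwise
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])
conn_bias = np.where(adj == 1, 0.0, -1e9)

out = connection_biased_attention(H, Wq, Wk, Wv, conn_bias)
print(out.shape)  # (4, 8)
```

In practice such a bias would be a learned embedding per connection type rather than a hard mask; the sketch only shows where the bias enters the computation.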