🤖 AI Summary
Zero-shot cross-domain knowledge graph completion (KGC) suffers from limited model expressiveness and from the difficulty of jointly predicting unseen entities and relations. Method: We propose TRIX, a fully inductive KGC framework featuring a unified triplet embedding architecture that is provably more expressive than existing models, together with an inductive graph neural network based on high-order tensor interactions and learnable structural encodings—requiring no target-domain prior knowledge or fine-tuning. Contribution/Results: This work achieves, for the first time, end-to-end joint modeling of entity and relation prediction under the fully inductive setting. It significantly improves zero-shot generalization, outperforming state-of-the-art fully inductive models across multiple benchmarks and surpassing large-context language models on cross-domain prediction tasks. The code is publicly available.
📝 Abstract
Fully inductive knowledge graph models can be trained on multiple domains and subsequently perform zero-shot knowledge graph completion (KGC) in new unseen domains. This is an important capability towards the goal of having foundation models for knowledge graphs. In this work, we introduce a more expressive and capable fully inductive model, dubbed TRIX, which not only yields strictly more expressive triplet embeddings (head entity, relation, tail entity) compared to state-of-the-art methods, but also introduces a new capability: directly handling both entity and relation prediction tasks in inductive settings. Empirically, we show that TRIX outperforms the state-of-the-art fully inductive models in zero-shot entity and relation predictions in new domains, and outperforms large-context LLMs in out-of-domain predictions. The source code is available at https://github.com/yuchengz99/TRIX.
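To make the task concrete, the sketch below illustrates the entity-prediction interface of KGC: given a query (head, relation, ?), rank candidate tail entities by a triplet score. This is a generic DistMult-style scorer with hand-picked toy embeddings, not the TRIX architecture; entity names and embedding values are invented for illustration.

```python
# Illustrative sketch of entity prediction in knowledge graph completion (KGC).
# A generic DistMult-style scorer (a standard KGC baseline), NOT TRIX itself;
# it only shows the task interface: given (head, relation, ?), rank tails.
from typing import Dict, List

def distmult_score(h: List[float], r: List[float], t: List[float]) -> float:
    """Score a triplet as sum_i h_i * r_i * t_i (DistMult)."""
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))

def rank_tails(head: str, rel: str,
               ent: Dict[str, List[float]],
               rel_emb: Dict[str, List[float]]) -> List[str]:
    """Return candidate tail entities sorted by score, best first."""
    h, r = ent[head], rel_emb[rel]
    candidates = [e for e in ent if e != head]  # filtered ranking: skip the head
    return sorted(candidates,
                  key=lambda e: distmult_score(h, r, ent[e]),
                  reverse=True)

# Toy embeddings, hand-picked so 'paris' best matches ('france', 'capital', ?).
ent = {"france": [1.0, 0.0], "paris": [1.0, 0.0], "berlin": [0.0, 1.0]}
rel_emb = {"capital": [1.0, 0.0]}
print(rank_tails("france", "capital", ent, rel_emb)[0])  # -> paris
```

A fully inductive model like TRIX must produce such rankings for entities and relations never seen during training, which is why its embeddings are computed from graph structure rather than looked up from a fixed table as in this toy example.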