🤖 AI Summary
Existing structural knowledge graph foundation models (e.g., Ultra) rely on a single relational transformation, such as element-wise multiplication, which limits expressiveness and generalization to unseen graph structures.
Method: We propose a novel foundation model for zero-shot inductive link prediction, introducing the first multi-head geometric attention mechanism that models relational transformations in real, complex, split-complex, and dual number spaces in parallel. A lightweight entropy-regularized gating mechanism enables adaptive fusion at the triple level. Our approach integrates algebraic message passing, multi-geometric embedding spaces, and relation-conditioned attention.
Contribution/Results: Evaluated across 56 heterogeneous knowledge graphs, our model achieves a 5.5% absolute improvement in MRR over Ultra and a 4.4% average gain across all benchmarks, empirically validating the complementarity and generalization benefits of multi-geometric representation.
📝 Abstract
Structural knowledge graph foundation models aim to generalize reasoning to completely new graphs with unseen entities and relations. A key limitation of existing approaches such as Ultra is their reliance on a single relational transformation (e.g., element-wise multiplication) in message passing, which constrains expressiveness and fails to capture the varied relational and structural patterns exhibited across diverse graphs. In this paper, we propose Gamma, a novel foundation model that introduces multi-head geometric attention to knowledge graph reasoning. Gamma replaces the single relational transformation with multiple parallel ones, based on real, complex, split-complex, and dual number algebras, each designed to model different relational structures. A relation-conditioned attention fusion mechanism then adaptively combines them at the link level via lightweight gating with entropy regularization, allowing the model to robustly emphasize the most appropriate relational bias for each triple pattern. We present a full formalization of these algebraic message functions and discuss how their combination increases expressiveness beyond any single space. Comprehensive experiments on 56 diverse knowledge graphs demonstrate that Gamma consistently outperforms Ultra in zero-shot inductive link prediction, with a 5.5% improvement in mean reciprocal rank on the inductive benchmarks and a 4.4% improvement across all benchmarks, highlighting the benefits of complementary geometric representations.
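To make the multi-geometric message functions concrete, the sketch below (a minimal illustration with assumed shapes and function names, not the paper's implementation) treats paired embedding dimensions as numbers in four 2-D algebras that differ only in the rule for the imaginary unit, then fuses the per-head messages with a softmax gate whose entropy can be penalized to keep the gating soft:

```python
import numpy as np

def geometric_products(h, r):
    """Per-head relational transformations (hypothetical naming).
    h, r: arrays of shape (..., 2d), split into halves (a, b) that play the
    role of real/imaginary parts. The algebras differ only in b*b:
      complex:       i^2 = -1
      split-complex: j^2 = +1
      dual:          eps^2 = 0
    """
    ha, hb = np.split(h, 2, axis=-1)
    ra, rb = np.split(r, 2, axis=-1)
    return {
        "real": h * r,  # plain element-wise (DistMult-style) product
        "complex": np.concatenate([ha * ra - hb * rb, ha * rb + hb * ra], axis=-1),
        "split": np.concatenate([ha * ra + hb * rb, ha * rb + hb * ra], axis=-1),
        "dual": np.concatenate([ha * ra, ha * rb + hb * ra], axis=-1),
    }

def fuse(messages, logits, tau=1.0):
    """Gate the per-head messages with a softmax over relation-conditioned
    scores; tau stands in for the entropy-regularization temperature."""
    w = np.exp(np.asarray(logits, dtype=float) / tau)
    w /= w.sum()
    fused = sum(wi * m for wi, m in zip(w, messages.values()))
    entropy = -np.sum(w * np.log(w + 1e-12))  # term a trainer could penalize
    return fused, w, entropy

# Toy usage: one entity/relation pair with d = 1 (so 2-D embeddings).
h = np.array([1.0, 2.0])   # interpreted as 1 + 2i in the complex head
r = np.array([3.0, 4.0])   # interpreted as 3 + 4i
msgs = geometric_products(h, r)
fused, weights, ent = fuse(msgs, logits=[0.0, 0.0, 0.0, 0.0])
```

With equal logits the gate reduces to a uniform average of the four heads; in the model the logits would instead be produced by a relation-conditioned scoring network, so each triple can emphasize the geometry that best fits its pattern.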