🤖 AI Summary
In knowledge graph completion (KGC), factorisation-based models (FMs) such as DistMult achieve strong transductive performance, often outperforming graph neural networks (GNNs), yet they struggle to incorporate node features and to generalise to unseen nodes in inductive settings. This paper bridges the two modelling paradigms with ReFactor GNNs: by reformulating the gradient descent procedure of FMs as message-passing operations, it shows how FMs can be cast as GNNs within a single differentiable architecture. Across well-established KGC benchmarks, ReFactor GNNs match the transductive performance of FM baselines while achieving state-of-the-art inductive performance, and they use an order of magnitude fewer parameters, improving both generalisation and parameter efficiency.
📝 Abstract
Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs). However, unlike GNNs, FMs struggle to incorporate node features and generalise to unseen nodes in inductive settings. Our work bridges the gap between FMs and GNNs by proposing ReFactor GNNs. This new architecture draws upon both modelling paradigms, which previously were largely thought of as disjoint. Concretely, using a message-passing formalism, we show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations, which forms the basis of our ReFactor GNNs. Across a multitude of well-established KGC benchmarks, our ReFactor GNNs achieve comparable transductive performance to FMs, and state-of-the-art inductive performance while using an order of magnitude fewer parameters.
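To make the core idea concrete, here is a minimal NumPy sketch (an illustration only, not the paper's actual ReFactor GNN architecture) of how a gradient step on DistMult's triple scores can be read as relation-aware message passing: the gradient of a triple's score with respect to the head embedding is exactly the relation-modulated tail embedding, i.e. a message arriving along that edge. The toy graph, sizes, and function names are invented for this sketch.

```python
import numpy as np

# Toy knowledge graph: (head, relation, tail) index triples (invented example).
triples = [(0, 0, 1), (0, 1, 2), (3, 0, 0)]
n_ent, n_rel, dim = 5, 2, 4

rng = np.random.default_rng(0)
E = rng.normal(size=(n_ent, dim))  # entity embeddings = node states
W = rng.normal(size=(n_rel, dim))  # relation embeddings = edge parameters

def total_score(emb):
    # Sum of DistMult tri-linear scores <e_h, w_r, e_t> over observed triples.
    return sum(float(np.sum(emb[h] * W[r] * emb[t])) for h, r, t in triples)

def fm_gradient_step(emb, lr=0.01):
    # One gradient-ascent step on total_score, written as message passing:
    # d<e_h, w_r, e_t>/d e_h = w_r * e_t, so each edge (h, r, t) delivers
    # the relation-modulated message w_r * e_t to node h (and w_r * e_h to t).
    messages = np.zeros_like(emb)
    for h, r, t in triples:
        messages[h] += W[r] * emb[t]  # message to head from tail via r
        messages[t] += W[r] * emb[h]  # message to tail from head via r
    return emb + lr * messages  # node update = aggregate messages, then step

E_new = fm_gradient_step(E)
```

Because each node's update depends only on messages from its graph neighbours, stacking such steps yields a GNN-style computation; this correspondence between FM training dynamics and message passing is the intuition the paper builds on.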