🤖 AI Summary
To address the heavy reliance of graph neural networks (GNNs) on labeled training data for link prediction, this paper proposes Parameter-Free Message Passing (PFMP), which removes all learnable feature-transformation parameters from message-passing neural networks (MPNNs) and propagates information using only the graph topology and the initial high-dimensional node features. The authors demonstrate, across multiple benchmark datasets, that PFMP matches or surpasses fully parameterized models such as GCN and GAT. Theoretically, they show that PFMP implicitly captures path-based topological similarity, yielding three key advantages: (i) high efficiency, with a 3–5× inference speedup; (ii) zero training overhead; and (iii) strong interpretability. Notably, PFMP shows its largest gains in settings with high-dimensional node features, establishing a novel low-resource paradigm for graph learning.
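The core idea can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact pipeline: it keeps the symmetric degree normalization of a GCN layer but drops the learned weight matrices, so propagation depends only on the topology and the input features. The function names (`untrained_propagate`, `link_score`) are invented for this example.

```python
import numpy as np

def untrained_propagate(adj, features, num_layers=2):
    """Propagate node features with no learnable parameters, using
    GCN-style symmetric normalization D^{-1/2}(A+I)D^{-1/2} but with
    the weight matrices removed."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # normalized adjacency
    h = features
    for _ in range(num_layers):
        h = norm @ h                             # message passing, no weights
    return h

def link_score(h, u, v):
    """Score a candidate link (u, v) by the inner product of the
    propagated node features."""
    return float(h[u] @ h[v])
```

On a path graph 0-1-2-3 with one-hot input features, this assigns a higher score to the adjacent pair (0, 1) than to the distant pair (0, 3), as a parameter-free link predictor should.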
📝 Abstract
Message passing neural networks (MPNNs) operate on graphs by exchanging information between neighbouring nodes. MPNNs have been successfully applied to various node-, edge-, and graph-level tasks in areas like molecular science, computer vision, natural language processing, and combinatorial optimization. However, most MPNNs require training on large amounts of labeled data, which can be costly and time-consuming. In this work, we explore the use of various untrained message passing layers in graph neural networks, i.e., variants of popular message passing architectures from which we remove all trainable parameters used to transform node features in the message passing step. Focusing on link prediction, we find that untrained message passing layers can lead to competitive and even superior performance compared to fully trained MPNNs, especially in the presence of high-dimensional features. We provide a theoretical analysis of untrained message passing by relating the inner products of features implicitly produced by untrained message passing layers to path-based topological node similarity measures. As such, untrained message passing architectures can be viewed as a highly efficient and interpretable approach to link prediction.
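The connection between inner products of propagated features and path-based similarity can be made concrete in a special case. The toy check below uses unnormalized adjacency aggregation and one-hot (identity) input features, so k untrained propagation steps give H = A^k, and the Gram matrix H H^T = A^{2k} counts walks of length 2k between node pairs. This is only an illustration of the stated relationship under simplifying assumptions; the paper's analysis covers more general features and aggregations.

```python
import numpy as np

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

# One-hot (identity) node features; k untrained propagation steps
# with plain adjacency aggregation yield H = A^k.
k = 2
h = np.eye(4)
for _ in range(k):
    h = adj @ h

# Gram matrix of propagated features:
# (H H^T)_{uv} = (A^{2k})_{uv} = number of walks of length 2k from u to v.
gram = h @ h.T
walks = np.linalg.matrix_power(adj, 2 * k)
assert np.allclose(gram, walks)
```

So in this setting the inner-product link score for a pair (u, v) is exactly a walk count, which is why untrained message passing behaves like a path-based topological similarity measure.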