Link Prediction with Untrained Message Passing Layers

📅 2024-06-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To reduce the heavy reliance on labeled training data in graph neural networks (GNNs), this paper studies untrained message passing layers: variants of popular message passing neural network (MPNN) architectures in which all trainable feature-transformation parameters are removed, so that information propagates solely via the graph topology and the initial node features. Focusing on link prediction, the authors show across multiple benchmark datasets that untrained layers can match or even surpass fully trained MPNNs, particularly when node features are high-dimensional. Theoretically, they relate the inner products of features implicitly produced by untrained message passing to path-based topological node similarity measures, making the approach highly efficient, training-free, and interpretable — a low-resource alternative for link prediction on graphs.

📝 Abstract
Message passing neural networks (MPNNs) operate on graphs by exchanging information between neighbouring nodes. MPNNs have been successfully applied to various node-, edge-, and graph-level tasks in areas like molecular science, computer vision, natural language processing, and combinatorial optimization. However, most MPNNs require training on large amounts of labeled data, which can be costly and time-consuming. In this work, we explore the use of various untrained message passing layers in graph neural networks, i.e. variants of popular message passing architectures in which we remove all trainable parameters that are used to transform node features in the message passing step. Focusing on link prediction, we find that untrained message passing layers can lead to competitive and even superior performance compared to fully trained MPNNs, especially in the presence of high-dimensional features. We provide a theoretical analysis of untrained message passing by relating the inner products of features implicitly produced by untrained message passing layers to path-based topological node similarity measures. As such, untrained message passing architectures can be viewed as a highly efficient and interpretable approach to link prediction.
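The idea described in the abstract can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the authors' code): a GCN-style layer with the learnable weight matrix removed, so node features are simply propagated over the symmetrically normalized adjacency matrix, and candidate links are scored by the inner product of the resulting node embeddings. The graph, feature dimensions, and function names are illustrative assumptions.

```python
import numpy as np

def untrained_propagate(adj, features, num_layers=2):
    """Parameter-free message passing: repeatedly multiply features by
    the symmetrically normalized adjacency matrix (GCN-style, but with
    no trainable weight matrix and no nonlinearity)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt   # D^{-1/2} (A+I) D^{-1/2}
    h = features
    for _ in range(num_layers):
        h = norm_adj @ h                         # propagation only, no transform
    return h

def link_score(h, u, v):
    """Score a candidate edge (u, v) by the inner product of embeddings;
    the paper relates such inner products to path-based node similarity."""
    return float(h[u] @ h[v])

# Toy 4-node path graph 0-1-2-3 with random high-dimensional features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 16))
h = untrained_propagate(adj, x)
print(link_score(h, 0, 2), link_score(h, 0, 3))
```

Because no parameters are learned, "inference" here is just a few sparse matrix products over fixed features, which is what makes the untrained approach cheap and directly interpretable in terms of weighted path counts.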
Problem

Research questions and friction points this paper is trying to address.

Reducing training costs for graph neural networks
Exploring untrained message passing for link prediction
Analyzing theoretical connections to topological similarity measures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Untrained message passing layers replace trainable parameters
Untrained layers achieve competitive link prediction performance
Untrained message passing relates to path-based node similarity
Lisi Qarkaxhija
Chair of Machine Learning for Complex Networks, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg, DE
Anatol E. Wegner
Chair of Machine Learning for Complex Networks, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg, DE
Ingo Scholtes
Professor of Machine Learning for Complex Networks at University of Würzburg
graph learning · network science · statistical relational learning · causal ML · software engineering