🤖 AI Summary
Existing analyses of message-passing graph neural networks (MPNNs) for node and link prediction often rely on unrealistic i.i.d. assumptions, neglecting the critical roles of graph topology, aggregation mechanisms, and loss functions in out-of-distribution inductive generalization. Method: We propose the first unified theoretical framework that systematically models node- and link-level dependencies, supports both inductive and transductive learning, and, for the first time, analytically quantifies the intrinsic impact of graph topology on generalization error. The framework is agnostic to the choice of aggregation function and loss design. Contribution/Results: Through rigorous theoretical analysis and empirical validation, our framework advances the understanding of MPNN generalization behavior, revealing how structural properties govern predictive robustness. It yields interpretable, principled design guidelines for robust graph representation learning, bridging a key gap between theory and practice in geometric deep learning.
📝 Abstract
Using message-passing graph neural networks (MPNNs) for node and link prediction is crucial in various scientific and industrial domains, which has led to the development of diverse MPNN architectures. Despite working well in practical settings, their ability to generalize beyond the training set remains poorly understood. While some studies have explored MPNNs' generalization in graph-level prediction tasks, much less attention has been given to node- and link-level predictions. Existing works often rely on unrealistic i.i.d. assumptions, overlooking possible correlations between nodes or links, and assume fixed aggregation and impractical loss functions while neglecting the influence of graph structure. In this work, we introduce a unified framework to analyze the generalization properties of MPNNs in inductive and transductive node and link prediction settings, incorporating diverse architectural parameters and loss functions and quantifying the influence of graph structure. Moreover, our framework applies beyond graphs to any classification task in the inductive or transductive setting. Our empirical study supports our theoretical insights, deepening our understanding of MPNNs' generalization capabilities in these tasks.
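For readers unfamiliar with the message-passing scheme the abstract refers to, the following is a minimal sketch of one MPNN layer with mean aggregation, written with NumPy. All names, shapes, and the choice of mean aggregation and ReLU are illustrative assumptions for exposition, not the paper's specific architecture (the paper's framework is precisely agnostic to the aggregation and loss choices):

```python
import numpy as np

def mpnn_layer(H, A, W_self, W_nbr):
    """One message-passing layer with mean aggregation (illustrative sketch).

    H: (n, d) node feature matrix; A: (n, n) binary adjacency matrix;
    W_self, W_nbr: (d, d') weight matrices (hypothetical names).
    Each node averages its neighbours' features, then combines that
    message with its own representation through a ReLU nonlinearity.
    """
    deg = A.sum(axis=1, keepdims=True)   # node degrees
    deg = np.maximum(deg, 1.0)           # guard against isolated nodes
    messages = (A @ H) / deg             # mean of neighbour features
    return np.maximum(H @ W_self + messages @ W_nbr, 0.0)  # ReLU

# Tiny example: a path graph on 3 nodes with one-hot features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
rng = np.random.default_rng(0)
W_self = rng.standard_normal((3, 2))
W_nbr = rng.standard_normal((3, 2))
H_next = mpnn_layer(H, A, W_self, W_nbr)
print(H_next.shape)  # new (n, d') node embeddings
```

Stacking such layers propagates information along the graph, which is why node and link predictions at different vertices are correlated through the topology; this is the dependence structure the i.i.d. assumptions criticized above fail to capture.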