🤖 AI Summary
Current graph learning research overemphasizes expressive power while neglecting the generalization guarantees of message-passing graph neural networks (MPNNs), leaving a gap in their theoretical understanding. Method: We conduct a systematic, critical survey that integrates Rademacher complexity, algorithmic stability, spectral analysis, and geometric perspectives on generalization, unifying these otherwise disparate analytical approaches under common abstractions. Contribution/Results: We propose the first unified taxonomy for MPNN generalization, rigorously characterizing known generalization bounds and identifying their key determinants, including data distribution assumptions and graph structural perturbation models. Our analysis exposes fundamental limitations of existing theory under non-i.i.d. graph data and pinpoints three open challenges. We advocate future directions centered on realistic graph distribution modeling, robustness to dynamic structural changes, and scalable generalization analysis, shifting GNN theory from “what can be represented” toward “where generalization holds.”
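To make concrete what kind of guarantee is being characterized, the following is a standard uniform-convergence bound based on empirical Rademacher complexity, shown purely as an illustration: the notation ($\ell$, $\mathcal{F}$, $\hat{\mathfrak{R}}_S$, $n$, $\delta$) follows common statistical-learning conventions and is an assumption of this sketch, not notation taken from the survey itself.

```latex
% Illustrative Rademacher-complexity generalization bound (standard form,
% not a result specific to this survey). For a loss \ell bounded in [0,1],
% a hypothesis class \mathcal{F} of MPNNs, and an i.i.d. sample
% S = ((G_1, y_1), ..., (G_n, y_n)) drawn from \mathcal{D}, with probability
% at least 1 - \delta over the draw of S, every f in \mathcal{F} satisfies:
\mathbb{E}_{(G,y)\sim\mathcal{D}}\big[\ell(f(G),y)\big]
  \;\le\;
  \frac{1}{n}\sum_{i=1}^{n}\ell\big(f(G_i),y_i\big)
  \;+\; 2\,\hat{\mathfrak{R}}_S(\ell\circ\mathcal{F})
  \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}},
% where the empirical Rademacher complexity of a class \mathcal{H} is
\hat{\mathfrak{R}}_S(\mathcal{H})
  \;=\;
  \mathbb{E}_{\sigma\sim\{\pm 1\}^{n}}\Big[\sup_{h\in\mathcal{H}}
  \frac{1}{n}\sum_{i=1}^{n}\sigma_i\, h(G_i,y_i)\Big].
```

This form also makes explicit why the i.i.d. assumption flagged above is load-bearing: the concentration term $3\sqrt{\ln(2/\delta)/(2n)}$ relies on the sample graphs being drawn independently, which is precisely what breaks down under the non-i.i.d. graph data the survey highlights.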
📝 Abstract
Message-passing graph neural networks (MPNNs) have emerged as the leading approach for machine learning on graphs, attracting significant attention in recent years. While a large body of work has explored the expressivity of MPNNs, i.e., their ability to separate graphs and approximate functions over them, comparatively little attention has been directed toward their generalization abilities, i.e., their ability to make meaningful predictions beyond the training data. Here, we systematically review the existing literature on the generalization abilities of MPNNs. We analyze the strengths and limitations of these studies, providing insights into their methodologies and findings. Furthermore, we identify potential avenues for future research, aiming to deepen our understanding of the generalization abilities of MPNNs.
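For context, MPNNs compute node representations by iteratively aggregating information from graph neighborhoods. The generic update below is a standard formulation in the style of Gilmer et al. (2017); the names $\mathrm{UPD}$, $\mathrm{AGG}$, and $\mathrm{READOUT}$ are placeholders for learned functions, not notation taken from this survey.

```latex
% Generic message-passing layer (standard formulation; UPD, AGG, and
% READOUT are placeholder names). h_v^{(t)} is the representation of
% node v after t layers; N(v) is v's neighborhood and {{...}} a multiset.
h_v^{(t)} \;=\; \mathrm{UPD}^{(t)}\!\Big(
    h_v^{(t-1)},\;
    \mathrm{AGG}^{(t)}\big(\{\!\!\{\, h_u^{(t-1)} : u \in N(v) \,\}\!\!\}\big)
\Big), \qquad t = 1,\dots,T.
% A graph-level prediction pools the final node representations:
h_G \;=\; \mathrm{READOUT}\big(\{\!\!\{\, h_v^{(T)} : v \in V(G) \,\}\!\!\}\big).
```

In these terms, expressivity asks which graphs such updates can distinguish (known to be bounded by the 1-dimensional Weisfeiler-Leman test), whereas generalization, the focus of this survey, asks how the learned $\mathrm{UPD}$, $\mathrm{AGG}$, and $\mathrm{READOUT}$ functions behave on graphs beyond the training sample.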