🤖 AI Summary
This work investigates operator-level convergence and transferability of Graph Neural Networks (GNNs) under the graphon framework, the continuum limit of graph sequences. Specifically, it addresses the setting in which the spectra of sampled graphs converge to those of the limiting graphon operator. We establish, for the first time, upper bounds on the convergence rate of graph neural operators under three regularity regimes: no smoothness assumption, global Lipschitz continuity, and piecewise Lipschitz continuity. The bounds quantify a trade-off: stronger smoothness assumptions yield faster convergence rates but reduce model expressivity. Methodologically, we combine spectral graph theory with graphon analysis to obtain rigorous derivations. Empirical validation on both synthetic and real-world graph datasets confirms the tightness and practical relevance of the bounds. Our results provide the first asymptotic theoretical foundation for GNN generalization across graphs, with explicit convergence-rate guarantees for each regularity regime.
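For orientation, the setup behind these statements follows the usual graphon-literature conventions (the display below is a sketch in our own notation, not taken verbatim from the paper). A graphon $W:[0,1]^2\to[0,1]$ induces the integral operator

$$(T_W f)(u) \;=\; \int_0^1 W(u,v)\,f(v)\,\mathrm{d}v,$$

while a graph $G_n$ sampled from $W$ induces the normalized shift operator $S_n = A_n/n$, where $A_n$ is the adjacency matrix. Operator-level convergence then means that for a spectral filter $h$ and sampled signal $x_n$, the discrepancy $\lVert h(S_n)\,x_n - h(T_W)\,f \rVert$ vanishes as $n \to \infty$, with the rate of decay governed by the regularity assumed of $W$ (none, global Lipschitz, or piecewise Lipschitz).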
📝 Abstract
Graphons, as limits of graph sequences, provide a framework for analyzing the asymptotic behavior of graph neural operators. Spectral convergence of sampled graphs to their limiting graphon yields operator-level convergence rates, enabling transferability analyses of GNNs. This note summarizes known bounds under no smoothness assumptions, global Lipschitz continuity, and piecewise Lipschitz continuity, highlighting the trade-offs between assumptions and rates, and illustrating the empirical tightness of the bounds on synthetic and real data.
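To make the empirical picture concrete, below is a minimal numerical sketch (not the authors' experiment): it samples graphs of increasing size from a fixed globally Lipschitz graphon, applies the same polynomial spectral filter to the sampled graph's normalized shift operator and to the discretized graphon operator, and reports the normalized discrepancy, which should shrink as n grows. The specific graphon, filter taps, and input signal are illustrative assumptions.

```python
# Sketch: operator-level convergence of a graph filter to its graphon limit.
# The graphon, filter coefficients, and signal are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def graphon(u, v):
    # A globally Lipschitz graphon on [0,1]^2 (assumed for this demo).
    return 0.5 * (1.0 - np.abs(u - v))

def sample_graph(n):
    # Latent positions and a Bernoulli graph sampled from the graphon.
    u = np.sort(rng.uniform(size=n))
    P = graphon(u[:, None], u[None, :])           # edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                    # symmetric, no self-loops
    return u, A / n                                # normalized shift operator

def apply_filter(S, x, h=(1.0, 0.8, 0.4)):
    # Polynomial spectral filter: h(S) x = sum_k h_k S^k x.
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx
    return y

for n in (100, 400, 1600):
    u, S = sample_graph(n)
    T = graphon(u[:, None], u[None, :]) / n        # discretized graphon operator
    x = np.cos(2.0 * np.pi * u)                    # Lipschitz graph signal
    err = np.linalg.norm(apply_filter(S, x) - apply_filter(T, x)) / np.sqrt(n)
    print(f"n={n:5d}  filter-output discrepancy ~ {err:.4f}")
```

Under the Lipschitz regime this discrepancy should decay on the order of n^{-1/2} (up to logarithmic factors), which is the kind of explicit rate the bounds above formalize; rerunning with a discontinuous graphon would illustrate the slower, piecewise regime.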