Neighborhood Sampling Does Not Learn the Same Graph Neural Network

📅 2025-09-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the unresolved question of how neighborhood sampling methods systematically affect large-scale graph neural network (GNN) training. We introduce the first unified analytical framework grounded in Neural Tangent Kernel (NTK) and Gaussian Process (GP) theory to model and characterize the impact of diverse sampling strategies on training dynamics and posterior distributions. Our analysis reveals that, while all sampling schemes converge to the same GP posterior as the sample size tends to infinity, they induce markedly distinct posterior covariance structures in finite-sample regimes, leading to incomparable lower bounds on prediction error. Consequently, no universally optimal sampling strategy exists. This work establishes the first theoretical link between sampling behavior and generalization performance in GNNs, providing a principled, NTK/GP-based foundation for sampling design and analysis.

๐Ÿ“ Abstract
Neighborhood sampling is an important ingredient in the training of large-scale graph neural networks. It suppresses the exponential growth of the neighborhood size across network layers and maintains feasible memory consumption and time costs. While it has become a standard implementation in practice, its systemic behaviors are less understood. We conduct a theoretical analysis by using the tool of neural tangent kernels, which characterize the (analogous) training dynamics of neural networks based on their infinitely wide counterparts -- Gaussian processes (GPs). We study several established neighborhood sampling approaches and the corresponding posterior GPs. With limited samples, the posteriors are all different, although they converge to the same one as the sample size increases. Moreover, the posterior covariance, which lower-bounds the mean squared prediction error, is incomparable across sampling approaches, aligning with observations that no sampling approach dominates.
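The mechanism the abstract describes, capping the number of neighbors expanded per node at each layer so the receptive field stays bounded, can be sketched as layer-wise uniform sampling in the style of GraphSAGE. This is a minimal illustration, not the paper's implementation; the adjacency format and function name are assumptions.

```python
import random

def sample_neighborhood(adj, seeds, fanouts, rng=None):
    """Layer-wise uniform neighbor sampling (GraphSAGE-style sketch).

    adj:     dict mapping each node to a list of its neighbors
    seeds:   target nodes whose representations we want to compute
    fanouts: max neighbors kept per node at each layer (root to leaf)

    Returns one node set per layer, starting with the seed layer.
    """
    rng = rng or random.Random(0)
    layers = [set(seeds)]
    frontier = set(seeds)
    for k in fanouts:
        nxt = set()
        for v in frontier:
            nbrs = adj.get(v, [])
            # Keep at most k neighbors instead of expanding all of them,
            # which prevents exponential growth of the sampled subgraph.
            nxt.update(rng.sample(nbrs, k) if len(nbrs) > k else nbrs)
        layers.append(nxt)
        frontier = nxt
    return layers
```

With fan-outs (k1, k2, ...) the sampled subgraph for a single seed has at most 1 + k1 + k1*k2 + ... nodes, independent of the true neighborhood sizes.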
Problem

Research questions and friction points this paper is trying to address.

Analyzes systemic behaviors of neighborhood sampling in GNN training
Compares posterior Gaussian processes under different sampling approaches
Reveals no sampling method dominates due to incomparable covariances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neighborhood sampling suppresses exponential growth of neighborhood size across layers
Theoretical analysis uses neural tangent kernels and Gaussian processes
Different sampling methods yield incomparable posterior covariances
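The posterior covariance the paper uses to lower-bound prediction error is the standard GP regression posterior, evaluated with the NTK induced by a given sampling scheme. The helper below computes that posterior covariance for an arbitrary kernel; it is a generic textbook sketch, and the concrete kernels plugged in here are placeholders rather than the paper's sampling-induced NTKs.

```python
import numpy as np

def gp_posterior_cov(K_train, K_cross, K_test, noise=1e-2):
    """Posterior covariance of GP regression.

    K_train: (n, n) kernel matrix on training nodes
    K_cross: (n, m) cross-kernel between training and test nodes
    K_test:  (m, m) kernel matrix on test nodes
    noise:   observation-noise variance sigma^2

    Implements Sigma = K_** - K_*^T (K + sigma^2 I)^{-1} K_*.
    Comparing this matrix across sampling-induced kernels is how the
    paper concludes no scheme dominates: the covariances are incomparable.
    """
    A = K_train + noise * np.eye(K_train.shape[0])
    return K_test - K_cross.T @ np.linalg.solve(A, K_cross)
```

Because the subtracted term is positive semi-definite, conditioning can only shrink the predictive variance relative to the prior, which is what makes the diagonal of this matrix a meaningful lower bound on mean squared prediction error.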
🔎 Similar Papers
2023-10-05 · Trans. Mach. Learn. Res. · Citations: 6