🤖 AI Summary
Graph Neural Networks (GNNs) suffer from long-range information attenuation due to iterative neighborhood aggregation, leading to over-smoothing and degraded generalization.
Method: We conduct a rigorous asymptotic analysis of GCN node classification under the Contextual Stochastic Block Model (CSBM), integrating the replica method with constrained dynamic mean-field theory (DMFT) to derive a continuous-limit model for deep GCNs. This enables principled characterization of architectural scaling laws—e.g., width scaling quadratically with depth—to mitigate over-smoothing.
Contribution/Results: We prove that, under appropriate scaling, the test error of deep GCNs asymptotically converges to the Bayes-optimal error; conversely, improper scaling induces over-smoothing. This work establishes the first statistically grounded, physics-inspired asymptotic framework for deep GNNs and yields verifiable design principles—such as depth-aware width scaling—for achieving optimal generalization.
📝 Abstract
Graph neural networks (GNNs) are designed to process data associated with graphs. They are finding an increasing range of applications; however, as with other modern machine learning techniques, their theoretical understanding is limited. GNNs can encounter difficulties in gathering information from distant nodes through iterated aggregation steps. This difficulty is partly caused by so-called oversmoothing, and overcoming it is one of the practically motivated challenges. We consider the situation where information is aggregated by multiple steps of convolution, leading to graph convolutional networks (GCNs). We analyze the generalization performance of a basic GCN, trained for node classification on data generated by the contextual stochastic block model. We predict its asymptotic performance by deriving the free energy of the problem, using the replica method, in the high-dimensional limit. Calling the number of convolutional steps the depth, we show the importance of going to large depth to approach Bayes-optimality. We detail how the architecture of the GCN has to scale with the depth to avoid oversmoothing. The resulting large-depth limit can be close to Bayes-optimality and leads to a continuous GCN. Technically, we tackle this continuous limit via an approach that resembles dynamical mean-field theory (DMFT) with constraints at the initial and final times. An expansion around large regularization allows us to solve the corresponding equations for the performance of the deep GCN. This promising tool may contribute to the analysis of further deep neural networks.
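The oversmoothing phenomenon discussed above can be illustrated numerically. Below is a minimal sketch, not the paper's setup: it samples a two-class contextual stochastic block model with hypothetical parameters (`n`, `d`, `p`, `q`, `mu_norm` are illustrative choices), applies repeated degree-normalized graph convolutions, and tracks the mean pairwise cosine similarity between node features, which drifts toward 1 as depth grows and node representations collapse onto a common direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def csbm(n=200, d=50, p=0.10, q=0.02, mu_norm=2.0):
    """Sample a two-class contextual SBM: adjacency + node features (illustrative parameters)."""
    y = rng.choice([-1, 1], size=n)
    # intra-class edges with probability p, inter-class with probability q
    probs = np.where(np.equal.outer(y, y), p, q)
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    A = A + A.T  # symmetric, no self-loops yet
    # features: class-dependent mean direction mu plus Gaussian noise
    mu = rng.standard_normal(d)
    mu *= mu_norm / np.linalg.norm(mu)
    X = np.outer(y, mu) / 2 + rng.standard_normal((n, d))
    return A, X, y

def convolve(A, X, depth):
    """Apply `depth` steps of symmetric degree-normalized aggregation (with self-loops)."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    for _ in range(depth):
        X = S @ X
    return X

def mean_cosine(X):
    """Average cosine similarity over all distinct node pairs."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    G = Xn @ Xn.T
    n = len(G)
    return (G.sum() - n) / (n * (n - 1))

A, X, y = csbm()
for depth in [0, 2, 8, 32]:
    # similarity approaches 1 at large depth: node features homogenize (oversmoothing)
    print(depth, round(mean_cosine(convolve(A, X, depth)), 3))
```

This sketches only the fixed-aggregation half of the story; the paper's point is that the trained GCN's width and regularization must scale with depth for the large-depth limit to remain close to Bayes-optimality rather than collapsing in this way.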