Statistical physics analysis of graph neural networks: Approaching optimality in the contextual stochastic block model

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) suffer from long-range information attenuation due to iterative neighborhood aggregation, leading to over-smoothing and degraded generalization. Method: We conduct a rigorous asymptotic analysis of GCN node classification under the Contextual Stochastic Block Model (CSBM), integrating the replica method with constrained dynamic mean-field theory (DMFT) to derive a continuous-limit model for deep GCNs. This enables principled characterization of architectural scaling laws—e.g., width scaling quadratically with depth—to mitigate over-smoothing. Contribution/Results: We prove that, under appropriate scaling, the test error of deep GCNs asymptotically converges to the Bayes-optimal error; conversely, improper scaling induces over-smoothing. This work establishes the first statistically grounded, physics-inspired asymptotic framework for deep GNNs and yields verifiable design principles—such as depth-aware width scaling—for achieving optimal generalization.

📝 Abstract
Graph neural networks (GNNs) are designed to process data associated with graphs. They are finding an increasing range of applications; however, as with other modern machine learning techniques, their theoretical understanding is limited. GNNs can encounter difficulties in gathering information from nodes that are far apart via iterated aggregation steps. This difficulty is partly caused by so-called oversmoothing, and overcoming it is one of the practically motivated challenges. We consider the situation where information is aggregated by multiple steps of convolution, leading to graph convolutional networks (GCNs). We analyze the generalization performance of a basic GCN, trained for node classification on data generated by the contextual stochastic block model. We predict its asymptotic performance by deriving the free energy of the problem, using the replica method, in the high-dimensional limit. Calling the number of convolutional steps the depth, we show the importance of going to large depth to approach Bayes-optimality. We detail how the architecture of the GCN has to scale with the depth to avoid oversmoothing. The resulting large-depth limit can be close to Bayes-optimality and leads to a continuous GCN. Technically, we tackle this continuous limit via an approach that resembles dynamical mean-field theory (DMFT) with constraints at the initial and final times. An expansion around large regularization allows us to solve the corresponding equations for the performance of the deep GCN. This promising tool may contribute to the analysis of further deep neural networks.
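To make the data model concrete, here is a minimal sketch of sampling from a two-community contextual stochastic block model: a community-structured adjacency matrix plus Gaussian node features carrying a label-aligned spike. This is an illustrative reconstruction, not the paper's code; all parameter values (`n`, `d`, `p_in`, `p_out`, `snr`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a two-community CSBM (illustrative values,
# not the scalings analyzed in the paper).
n, d = 200, 50             # number of nodes, feature dimension
p_in, p_out = 0.10, 0.02   # intra- and inter-community edge probabilities
snr = 1.0                  # strength of the label signal in the features

labels = rng.choice([-1, 1], size=n)      # community assignments
u = rng.standard_normal(d) / np.sqrt(d)   # hidden feature direction

# Adjacency: edge probability depends on whether node labels agree.
probs = np.where(np.equal.outer(labels, labels), p_in, p_out)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(float)       # symmetric, no self-loops

# Node features: label-aligned rank-one spike plus Gaussian noise.
X = snr * np.outer(labels, u) + rng.standard_normal((n, d)) / np.sqrt(d)
```

A GCN for this task would then aggregate `X` along `A` over several convolutional steps before classifying each node.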
Problem

Research questions and friction points this paper is trying to address.

How does a basic GCN generalize in node classification on graph data?
How can oversmoothing be overcome in deep graph convolutional networks?
Under what conditions does a GCN approach Bayes-optimal performance?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the replica method for asymptotic performance analysis.
Applies dynamical mean-field theory with initial- and final-time constraints.
Expands around large regularization to solve for deep-GCN performance.
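The oversmoothing phenomenon the paper targets can be demonstrated in a few lines: repeatedly applying a degree-normalized adjacency operator drives all node representations toward a common value, erasing node-level information. This toy sketch uses a random graph and random features, not the paper's CSBM setup or its depth-width scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy oversmoothing demonstration: iterate a row-normalized propagation
# operator and track how much the features still vary across nodes.
n, d = 100, 20
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                         # symmetric random graph
A += np.eye(n)                      # self-loops keep row sums positive
P = np.diag(1.0 / A.sum(axis=1)) @ A  # row-stochastic propagation operator

X = rng.standard_normal((n, d))
spread = []
for _ in range(31):
    spread.append(X.std(axis=0).mean())  # variability across nodes
    X = P @ X                            # one convolution step

# The across-node spread shrinks with depth: a deep stack of plain
# convolutions washes out node-level information (oversmoothing).
assert spread[-1] < 0.5 * spread[0]
```

In the paper's analysis, avoiding this collapse requires the architecture (e.g., the width) to scale appropriately with the depth.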
Duranthon
Statistical physics of computation laboratory, École polytechnique fédérale de Lausanne, Switzerland
Lenka Zdeborová
EPFL, Switzerland
statistical physics · learning theory · phase transitions · deep learning · high-dimensional statistics