🤖 AI Summary
Deep graph neural networks (GNNs) suffer severe performance degradation from over-smoothing, where node representations converge and lose discriminability. To address this, we introduce multiplicative ergodic theory into the analysis of GNN over-smoothing for the first time, establishing an explicit framework that characterizes the convergence rate of a normalized node-similarity measure. We rigorously derive asymptotic over-smoothing rates for GNNs both with and without residual connections, proving that residual connections exponentially suppress, or even eliminate, over-smoothing under broad parameter regimes. Our analysis, grounded in random matrix theory and normalized similarity modeling, unifies multiple mainstream GNN architectures. Extensive numerical experiments on benchmark graph datasets validate the quantitative accuracy of our theoretical predictions. This work provides the first general, quantitatively precise theoretical tool for understanding representational degeneration in deep GNNs.
📝 Abstract
Graph neural networks (GNNs) have achieved remarkable empirical success in processing and representing graph-structured data across various domains. However, a significant challenge known as "oversmoothing" persists, where vertex features become nearly indistinguishable in deep GNNs, severely restricting their expressive power and practical utility. In this work, we analyze the asymptotic oversmoothing rates of deep GNNs with and without residual connections by deriving explicit convergence rates for a normalized vertex similarity measure. Our analytical framework is grounded in the multiplicative ergodic theorem. Furthermore, we demonstrate that adding residual connections effectively mitigates or prevents oversmoothing across several broad families of parameter distributions. The theoretical findings are strongly supported by numerical experiments.