🤖 AI Summary
Deep graph neural networks (GNNs) often suffer from signal degradation, causing performance to deteriorate as depth increases. Method: Motivated by signal propagation (SP) analysis, we propose the Graph Embedding Variation (GEV) metric, a GNN-specific quantitative measure that, together with classical forward- and backward-propagation metrics, exposes an inherent limitation of existing initialization schemes: they cannot regulate all three metrics simultaneously. Building on this insight, we introduce SPoGInit, an initialization framework that searches for weight-initialization variances jointly optimizing the three SP metrics while accounting for the graph structure. Contribution/Results: Extensive experiments show that SPoGInit improves training stability and convergence accuracy for deep GNNs. It consistently outperforms standard baselines, including Xavier and He initializations, across diverse tasks and architectures (e.g., GCN, GAT, GIN), enabling sustained performance gains as network depth increases.
📝 Abstract
Graph Neural Networks (GNNs) often suffer from performance degradation as the network depth increases. This paper addresses this issue by introducing initialization methods that enhance signal propagation (SP) within GNNs. We propose three key metrics for effective SP in GNNs: forward propagation, backward propagation, and graph embedding variation (GEV). While the first two metrics derive from classical SP theory, the third is specifically designed for GNNs. We theoretically demonstrate that a broad range of commonly used initialization methods for GNNs, which exhibit performance degradation with increasing depth, fail to control these three metrics simultaneously. To address this limitation, we show that a direct exploitation of the SP analysis, namely searching for weight-initialization variances that optimize the three metrics, significantly enhances the SP in deep GCNs. We call this approach Signal Propagation on Graph-guided Initialization (SPoGInit). Our experiments demonstrate that SPoGInit outperforms commonly used initialization methods on various tasks and architectures. Notably, SPoGInit enables performance improvements as GNNs deepen, which represents a significant advancement in addressing depth-related challenges and highlights the validity and effectiveness of the SP analysis framework.
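The search described in the abstract can be sketched in a few lines. The code below is a hypothetical, heavily simplified illustration (not the paper's implementation): it uses a ReLU GCN, a forward-propagation metric (output-to-input norm ratio), and a toy stand-in for the GEV metric (variance of node embeddings around their mean, which collapses toward zero under over-smoothing), then grid-searches the weight-initialization standard deviation. The backward-propagation metric, the exact metric definitions, and the search procedure are all assumptions made for brevity.

```python
import numpy as np

def gcn_forward(A_hat, X, weights):
    """Linear-algebra GCN forward pass: H_{l+1} = ReLU(A_hat @ H_l @ W_l)."""
    H = X
    layers = [H]
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0.0)
        layers.append(H)
    return layers

def sp_metrics(A_hat, X, weights):
    """Toy SP metrics: forward norm ratio and a GEV-style embedding variance."""
    layers = gcn_forward(A_hat, X, weights)
    fwd = np.linalg.norm(layers[-1]) / (np.linalg.norm(layers[0]) + 1e-12)
    # Variance of final node embeddings around their mean; near 0 means
    # all node embeddings have collapsed (over-smoothing).
    gev = np.var(layers[-1] - layers[-1].mean(axis=0, keepdims=True))
    return fwd, gev

def search_init_std(A_hat, X, depth, width, candidate_stds, rng):
    """Pick the init std whose forward signal stays closest to unit scale
    while the embedding variation stays away from collapse."""
    best_std, best_score = None, np.inf
    for std in candidate_stds:
        weights = [rng.normal(0.0, std, (width, width)) for _ in range(depth)]
        fwd, gev = sp_metrics(A_hat, X, weights)
        # Penalize deviation of both metrics from O(1) on a log scale.
        score = abs(np.log(fwd + 1e-12)) + abs(np.log(gev + 1e-12))
        if score < best_score:
            best_std, best_score = std, score
    return best_std
```

A usage pattern would be to build the symmetrically normalized adjacency `A_hat` of the input graph, evaluate a small grid of candidate standard deviations, and initialize every layer with the winner before training begins.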