🤖 AI Summary
Graph Neural Networks (GNNs) often suffer from slow convergence and suboptimal performance when trained from random or information-poor initial node features. To address this, we leverage one-hot graph encoder embedding (GEE), a lightweight, statistically grounded scheme that generates high-quality structural node embeddings for initialization, yielding the GEE-powered GNN (GG) framework and its classification variant GG-C. We systematically establish, on statistical grounds, the critical role of initialization embeddings in determining GNN performance. GG is trained end to end, enabling adaptive fusion and optimization of structural and task-specific features under both unsupervised and supervised settings. Extensive experiments on real-world graph benchmarks show that GG achieves state-of-the-art (SOTA) results in node clustering while converging significantly faster, and that GG-C consistently outperforms mainstream baselines in node classification.
📝 Abstract
Graph neural networks (GNNs) have emerged as a powerful framework for a wide range of node-level graph learning tasks. However, their performance is often constrained by reliance on random or minimally informed initial feature representations, which can lead to slow convergence and suboptimal solutions. In this paper, we leverage a statistically grounded method, one-hot graph encoder embedding (GEE), to generate high-quality initial node features that enhance the end-to-end training of GNNs. We refer to this integrated framework as the GEE-powered GNN (GG), and demonstrate its effectiveness through extensive simulations and real-world experiments across both unsupervised and supervised settings. In node clustering, GG consistently achieves state-of-the-art performance, ranking first across all evaluated real-world datasets, while exhibiting faster convergence than standard GNNs. For node classification, we further propose an enhanced variant, GG-C, which concatenates the outputs of GG and GEE and outperforms competing baselines. These results confirm the importance of principled, structure-aware feature initialization in realizing the full potential of GNNs.
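To make the initialization step concrete, below is a minimal sketch of a one-hot graph encoder embedding in the style described above: each node's embedding aggregates the class-normalized one-hot indicators of its neighbors. The function name, the dense-NumPy formulation, and the exact normalization are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gee_embedding(A, labels, K):
    """Sketch of a GEE-style structural embedding.

    A      : (n, n) adjacency matrix (dense, for illustration)
    labels : (n,) integer class/cluster assignments in {0, ..., K-1}
    K      : number of classes
    Returns an (n, K) embedding where row i aggregates the
    class indicators of node i's neighbors, scaled by class size.
    """
    n = A.shape[0]
    W = np.zeros((n, K))
    for k in range(K):
        members = (labels == k)
        if members.sum() > 0:
            # One-hot column for class k, normalized by class size
            W[members, k] = 1.0 / members.sum()
    # Each node's embedding is the sum of its neighbors' normalized indicators
    return A @ W
```

The resulting matrix can then serve as the initial node features of a GNN; for a classification variant in the spirit of GG-C, one would concatenate the GNN's output with this embedding (e.g., `np.hstack([gnn_out, Z])`) before the final classifier.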