🤖 AI Summary
Traditional message-passing neural networks (MPNNs) suffer from over-smoothing as depth increases and struggle to jointly capture global and local graph structure. To address this, we propose a pretraining framework for graph neural networks based on learning Laplacian eigenvectors: a self-supervised task predicts the low-frequency eigenvectors of the graph Laplacian, implicitly modeling the graph's spectral structure without labels. The method is inherently structure-aware, domain-agnostic, and generalizes inductively; it also accommodates synthetic node features when real features are sparse. Extensive experiments show that models pretrained with our framework significantly outperform baselines across diverse downstream tasks, including node classification, link prediction, and graph classification, demonstrating stronger structural understanding and cross-domain generalization.
📝 Abstract
We propose a novel framework for pre-training Graph Neural Networks (GNNs) by inductively learning Laplacian eigenvectors. Traditional Message Passing Neural Networks (MPNNs) often struggle to capture global and regional graph structure, since increasing network depth raises the risk of over-smoothing. Because the low-frequency eigenvectors of the graph Laplacian encode global structure, pre-training a GNN to predict these eigenvectors encourages the network to learn large-scale structural patterns of each graph. Empirically, models pre-trained with our framework outperform baselines on a variety of graph structure-based tasks. Whereas most existing pre-training methods focus on domain-specific tasks such as node or edge feature reconstruction, our self-supervised framework is structure-based and highly flexible: eigenvector learning applies to any graph dataset, and synthetic features can be substituted when task-specific data is sparse.
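To make the pre-training target concrete, here is a minimal sketch (not the paper's code; the function name and use of a dense normalized Laplacian are our assumptions for illustration) of how the low-frequency Laplacian eigenvectors that serve as prediction targets could be computed for a graph given its adjacency matrix:

```python
import numpy as np

def laplacian_eigenvector_targets(adj, k):
    """Return the k lowest-frequency eigenpairs of the symmetric
    normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.

    Hypothetical helper: these eigenvectors would be the regression
    targets for the self-supervised pre-training task.
    """
    deg = adj.sum(axis=1)
    # D^{-1/2}, guarding against isolated nodes (degree 0)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    lap = np.eye(len(deg)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order, so the first k
    # columns are the low-frequency (global-structure) eigenvectors
    eigvals, eigvecs = np.linalg.eigh(lap)
    return eigvals[:k], eigvecs[:, :k]

# Example: a 6-node path graph
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

vals, vecs = laplacian_eigenvector_targets(adj, k=3)
print(vals.round(4))  # first eigenvalue is ~0 for a connected graph
```

Note that eigenvectors are only defined up to sign (and up to a basis rotation when eigenvalues repeat), so any loss against these targets must be made sign/basis invariant; for large graphs a sparse solver such as `scipy.sparse.linalg.eigsh` would replace the dense `eigh` call.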