Learning Laplacian Eigenvectors: a Pre-training Method for Graph Neural Networks

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional message-passing neural networks (MPNNs) suffer from oversmoothing with increasing depth and struggle to jointly capture global and local graph structures. To address this, we propose a Laplacian eigenvector learning-based pretraining framework for graph neural networks: it employs a self-supervised task that predicts low-frequency eigenvectors of the graph Laplacian matrix, enabling implicit modeling of spectral structural properties in an unsupervised manner. Our method is inherently structure-aware, domain-agnostic, and supports inductive generalization; moreover, it accommodates synthetic node features when real features are sparse. Extensive experiments demonstrate that models pretrained with our framework significantly outperform baselines across diverse downstream graph tasks—including node classification, link prediction, and graph classification—exhibiting superior structural understanding and cross-domain generalization capability.

📝 Abstract
We propose a novel framework for pre-training Graph Neural Networks (GNNs) by inductively learning Laplacian eigenvectors. Traditional Message Passing Neural Networks (MPNNs) often struggle to capture global and regional graph structure due to over-smoothing risk as network depth increases. Because the low-frequency eigenvectors of the graph Laplacian matrix encode global information, pre-training GNNs to predict these eigenvectors encourages the network to naturally learn large-scale structural patterns over each graph. Empirically, we show that models pre-trained via our framework outperform baseline models on a variety of graph structure-based tasks. While most existing pre-training methods focus on domain-specific tasks like node or edge feature reconstruction, our self-supervised pre-training framework is structure-based and highly flexible. Eigenvector-learning can be applied to all graph-based datasets, and can be used with synthetic features when task-specific data is sparse.
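The pre-training target described above can be made concrete with a small sketch: compute the k lowest-frequency eigenvectors of the symmetric normalized Laplacian and use them as regression targets for a GNN. This is an illustrative reconstruction, not the authors' implementation; the function name and the toy graph are assumptions.

```python
import numpy as np

def laplacian_eigenvector_targets(adj, k):
    """Return the k lowest-frequency eigenvectors of the symmetric
    normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.

    The columns (smallest eigenvalues first) serve as per-node
    regression targets for self-supervised pre-training.
    """
    deg = adj.sum(axis=1)
    # Guard against isolated nodes (degree 0).
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, :k]  # low frequency = smallest eigenvalues

# Toy example: a 4-node path graph (hypothetical data).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
targets = laplacian_eigenvector_targets(A, k=2)  # shape (4, 2)
```

Note that eigenvectors are only defined up to sign (and up to rotation within degenerate eigenspaces), so a practical training loss would need to be sign-invariant; how the paper handles this ambiguity is not specified in the summary above.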
Problem

Research questions and friction points this paper is trying to address.

Pre-training GNNs to learn Laplacian eigenvectors for structural patterns
Addressing MPNNs' limitations in capturing global graph structures
Providing flexible self-supervised pre-training for graph-based datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-training GNNs by learning Laplacian eigenvectors
Using low-frequency eigenvectors to capture global structure
Self-supervised framework applicable to all graph datasets
🔎 Similar Papers
2024-01-28 · AAAI Conference on Artificial Intelligence · Citations: 1
2021-12-14 · IEEE Transactions on Neural Networks and Learning Systems · Citations: 25