Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction

📅 2024-01-28
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 1
Influential: 1
🤖 AI Summary
Spectral graph neural networks (GNNs) suffer from limited expressive power and degraded fitting performance when the normalized Laplacian matrix has repeated eigenvalues. This work establishes theoretically, for the first time, that the number of *distinguishable* eigenvalues fundamentally bounds the expressive capacity of spectral GNNs. To address this, the authors propose an unsupervised eigenvalue correction mechanism that applies controlled perturbations and eigenvalue redistribution to break eigenvalue clustering and improve distinguishability, together with a compatible polynomial graph convolutional filter that operates on the corrected spectrum. Extensive experiments on synthetic graphs and multiple real-world benchmark datasets show that the method significantly improves node classification accuracy while enhancing generalization and robustness, reducing spectral GNNs' dependence on favorable eigenvalue distributions.

📝 Abstract
In recent years, spectral graph neural networks, characterized by polynomial filters, have garnered increasing attention and have achieved remarkable performance in tasks such as node classification. These models typically assume that the eigenvalues of the normalized Laplacian matrix are distinct from each other, and thus expect a polynomial filter to have high fitting ability. However, this paper empirically observes that normalized Laplacian matrices frequently possess repeated eigenvalues. Moreover, we theoretically establish that the number of distinguishable eigenvalues plays a pivotal role in determining the expressive power of spectral graph neural networks. In light of this observation, we propose an eigenvalue correction strategy that can free polynomial filters from the constraints of repeated eigenvalue inputs. Concretely, the proposed strategy pushes the eigenvalues toward a more uniform distribution, thus mitigating repeated eigenvalues and improving the fitting capacity and expressive power of polynomial filters. Extensive experimental results on both synthetic and real-world datasets demonstrate the superiority of our method.
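The repeated-eigenvalue phenomenon described in the abstract is easy to reproduce. As a minimal illustration (my own example, not taken from the paper), the normalized Laplacian of a star graph has the eigenvalue 1 with high multiplicity:

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for an adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Star graph K_{1,5}: node 0 is the hub, nodes 1..5 are leaves.
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0

eigvals = np.linalg.eigvalsh(normalized_laplacian(A))
print(np.round(eigvals, 6))  # eigenvalue 1 appears with multiplicity 4
```

On inputs like this, a polynomial filter g(L) can only assign one response to all four repeated eigenvalues, which is exactly the expressiveness bottleneck the paper analyzes.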
Problem

Research questions and friction points this paper is trying to address.

Enhancing the expressive power of spectral graph neural networks
Addressing the repeated-eigenvalue issue in normalized Laplacian matrices
Improving the fitting capacity of polynomial filters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eigenvalue correction strategy
Uniform eigenvalue distribution
Enhanced polynomial filter fitting
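A minimal sketch of what an eigenvalue correction of this kind might look like (an illustrative reading of the summary, not the paper's code): interpolate the sorted spectrum with an equispaced grid on [0, 2], so repeated eigenvalues are pulled apart while the overall ordering and the [0, 2] range are preserved. The mixing weight `beta` is an assumed hyperparameter name.

```python
import numpy as np

def correct_eigenvalues(eigvals, beta=0.5):
    """Illustrative correction: mix the sorted eigenvalues with an
    equispaced grid on [0, 2] to break up repeated values.
    beta = 1 keeps the original spectrum; beta = 0 gives a uniform grid."""
    n = len(eigvals)
    lam = np.sort(eigvals)
    uniform = 2.0 * np.arange(n) / (n - 1)
    return beta * lam + (1.0 - beta) * uniform

# Spectrum of a 5-leaf star graph: eigenvalue 1 repeated four times.
lam = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 2.0])
corrected = correct_eigenvalues(lam, beta=0.5)
print(np.round(corrected, 3))  # [0.  0.7 0.9 1.1 1.3 2. ] — all distinct
```

A polynomial filter evaluated on the corrected values can now respond differently at each of the six spectral positions, which is the fitting-capacity gain the Innovation items describe.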
Kangkang Lu
Beijing University of Posts and Telecommunications
Yanhua Yu
Beijing University of Posts and Telecommunications
Hao Fei
National University of Singapore
Vision and Language · Large Language Model · Natural Language Processing · World Modeling
Xuan Li
Beijing University of Posts and Telecommunications
Zixuan Yang
Beijing University of Posts and Telecommunications
Zirui Guo
Beijing University of Posts and Telecommunications
Contrastive learning · Graph representation learning · Recommendation
Meiyu Liang
Beijing University of Posts and Telecommunications
Mengran Yin
Beijing University of Posts and Telecommunications
Tat-Seng Chua
National University of Singapore