Spectral Graph Neural Networks are Incomplete on Graphs with a Simple Spectrum

📅 2025-06-05
🤖 AI Summary
This work identifies an expressivity deficiency in Spectral Graph Neural Networks (SGNNs): even on graphs with a simple spectrum (all Laplacian eigenvalues distinct), SGNNs can fail to distinguish non-isomorphic graphs, a fundamental limitation unaddressed in prior spectral expressivity analyses. To resolve this, the paper introduces the multiplicity of the largest Laplacian eigenvalue as a new axis for characterizing expressivity and builds an expressivity hierarchy on it. It further proposes a rotation-equivariant spectral adaptation mechanism that provably enhances expressivity, achieving completeness on simple-spectrum graphs. The analysis draws on graph Laplacian spectral theory, the k-dimensional Weisfeiler–Lehman (k-WL) test, homomorphism counting, and eigenvector normalization. Experiments on MNIST superpixel classification and spectral consistency on ZINC show significant improvements (+3.2% classification accuracy and a 41% reduction in spectral distance), empirically validating both the theoretical limitation and the efficacy of the proposed solution.

📝 Abstract
Spectral features are widely incorporated within Graph Neural Networks (GNNs) to improve their expressive power, or their ability to distinguish among non-isomorphic graphs. One popular example is the usage of graph Laplacian eigenvectors for positional encoding in MPNNs and Graph Transformers. The expressive power of such Spectrally-enhanced GNNs (SGNNs) is usually evaluated via the k-WL graph isomorphism test hierarchy and homomorphism counting. Yet, these frameworks align poorly with the graph spectra, yielding limited insight into SGNNs' expressive power. We leverage a well-studied paradigm of classifying graphs by their largest eigenvalue multiplicity to introduce an expressivity hierarchy for SGNNs. We then prove that many SGNNs are incomplete even on graphs with distinct eigenvalues. To mitigate this deficiency, we adapt rotation equivariant neural networks to the graph spectra setting to propose a method to provably improve SGNNs' expressivity on simple spectrum graphs. We empirically verify our theoretical claims via an image classification experiment on the MNIST Superpixel dataset and eigenvector canonicalization on graphs from ZINC.
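The spectral feature the abstract centers on, graph Laplacian eigenvectors used as positional encodings, can be illustrated with a minimal NumPy sketch. This is our own illustration, not the paper's code; the function name `laplacian_pe` and the choice of the combinatorial Laplacian are assumptions.

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k eigenvectors of the combinatorial Laplacian with the
    smallest nonzero eigenvalues, a common positional encoding for MPNNs
    and Graph Transformers."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                         # L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue 0 on a connected graph).
    return eigvecs[:, 1:k + 1]

# Example: the 4-cycle C4.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)
print(pe.shape)  # (4, 2)
```

Note that each eigenvector is defined only up to sign, and eigenvectors of a repeated eigenvalue only up to an orthogonal rotation of the eigenspace. That ambiguity is what motivates the paper's rotation-equivariant treatment and the eigenvector canonicalization experiment on ZINC.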
Problem

Research questions and friction points this paper is trying to address.

SGNNs lack completeness on simple spectrum graphs
Current frameworks poorly evaluate SGNN expressive power
How to provably enhance SGNN expressivity on simple spectrum graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classify graphs by largest eigenvalue multiplicity
Propose rotation equivariant networks for spectra
Improve SGNN expressivity on simple spectrum graphs
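The first innovation bullet, classifying graphs by the multiplicity of their largest Laplacian eigenvalue, amounts to a small spectral computation. A hedged sketch (our own illustration, not the paper's code; the numerical tolerance `tol` is an assumption):

```python
import numpy as np

def largest_eig_multiplicity(adj: np.ndarray, tol: float = 1e-8) -> int:
    """Multiplicity of the largest eigenvalue of the combinatorial
    Laplacian L = D - A, counted up to numerical tolerance."""
    deg = np.diag(adj.sum(axis=1))
    eigvals = np.linalg.eigvalsh(deg - adj)  # Laplacian spectrum, ascending
    return int(np.sum(np.abs(eigvals - eigvals[-1]) < tol))

# Path graph P3: Laplacian eigenvalues {0, 1, 3}, so the largest
# eigenvalue is simple (multiplicity 1).
p3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
print(largest_eig_multiplicity(p3))  # 1

# Complete graph K3: Laplacian eigenvalues {0, 3, 3}, multiplicity 2.
k3 = np.ones((3, 3)) - np.eye(3)
print(largest_eig_multiplicity(k3))  # 2
```

When every eigenvalue has multiplicity 1 the graph has a simple spectrum, which is the regime on which the paper proves its completeness guarantee.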
Snir Hordan
Faculty of Mathematics, Department of Applied Mathematics, Technion - Israel Institute of Technology
Maya Bechler-Speicher
Research Scientist, Meta | PhD CS@Tel-Aviv University
Machine Learning, Graph Machine Learning, Graph Neural Networks
Gur Lifshitz
Blavatnik School of Computer Science, Tel-Aviv University
Nadav Dym
Faculty of Mathematics, Faculty of Computer Science, Technion - Israel Institute of Technology