Beyond the Laplacian: Interpolated Spectral Augmentation for Graph Neural Networks

📅 2025-11-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the degradation of graph neural network (GNN) performance caused by sparse or missing node features in real-world graph data, this paper proposes Interpolated Laplacian Embeddings (ILEs)—a differentiable and interpretable spectral feature enhancement method grounded in a family of graph Laplacian matrices. ILEs construct generalized spectral representations via continuous interpolation, explicitly encoding multi-scale topological structural information and overcoming the limited expressivity of conventional discrete spectral embeddings. The method requires no additional supervision and is plug-and-play, enabling end-to-end joint optimization with diverse GNN architectures. Extensive experiments on multiple real-world graph benchmarks demonstrate that ILEs significantly improve node classification accuracy—particularly when raw feature dimensions are extremely low (e.g., ≤5) or entirely absent. This work establishes a novel paradigm for spectral enhancement and provides theoretical foundations for learning expressive, topology-aware node representations.

Technology Category

Application Category

📝 Abstract
Graph neural networks (GNNs) are fundamental tools in graph machine learning. The performance of GNNs relies crucially on the availability of informative node features, which can be limited or absent in real-life datasets and applications. A natural remedy is to augment the node features with embeddings computed from eigenvectors of the graph Laplacian matrix. While it is natural to default to Laplacian spectral embeddings, which capture meaningful graph connectivity information, we ask whether spectral embeddings from alternative graph matrices can also provide useful representations for learning. We introduce Interpolated Laplacian Embeddings (ILEs), which are derived from a simple yet expressive family of graph matrices. Using tools from spectral graph theory, we offer a straightforward interpretation of the structural information that ILEs capture. We demonstrate through simulations and experiments on real-world datasets that feature augmentation via ILEs can improve performance across commonly used GNN architectures. Our work offers a straightforward and practical approach that broadens the practitioner's spectral augmentation toolkit when node features are limited.
Problem

Research questions and friction points this paper is trying to address.

Addressing limited node features in graph neural networks
Exploring spectral embeddings beyond standard Laplacian matrices
Enhancing GNN performance through interpolated spectral augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpolated Laplacian Embeddings from alternative graph matrices
Feature augmentation using expressive spectral graph families
Broadening spectral augmentation toolkit for limited node features