Boosting Graph Neural Network Expressivity with Learnable Lanczos Constraints

📅 2024-08-22
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
GNNs often suffer from limited expressivity in link prediction because the message-passing paradigm is no more powerful than the 1-WL test. To address this, we propose LLwLC, a novel framework that goes beyond this limitation. First, links are mapped to induced subgraphs, whose spectral embeddings are derived from the eigenvector basis of the Laplacian matrix. Second, a learnable Lanczos algorithm coupled with linear constraints enables efficient and accurate spectral approximation. Third, we introduce two innovations: (i) vertex-deleted subgraph encoding to enhance structural discriminability, and (ii) Neumann eigenvalue constraints to strengthen link representations. Theoretically, LLwLC is provably more expressive than 1-WL. Empirically, it achieves state-of-the-art performance on PubMed and OGBL-Vessel using only 5% and 10% of the training data, respectively, while accelerating inference by 20x and 10x compared to prior methods.
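The spectral approximation at the heart of the method builds on the Lanczos iteration. Below is a minimal NumPy sketch of the plain, unconstrained iteration on a small graph Laplacian; the paper's actual contribution, making the process learnable and adding linear constraints, is not reproduced here, so treat this only as background for what "Lanczos" computes:

```python
import numpy as np

def lanczos(L, q0, k):
    """Plain k-step Lanczos iteration: builds an orthonormal basis Q and a
    tridiagonal T with Q.T @ L @ Q ~= T; the eigenvalues of T (Ritz values)
    approximate the extremal eigenvalues of the symmetric matrix L."""
    n = L.shape[0]
    Q = np.zeros((n, k))
    alphas, betas = [], []
    q = q0 / np.linalg.norm(q0)
    beta, q_prev = 0.0, np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = L @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        # Full re-orthogonalization against all previous basis vectors
        # (cheap at this scale; keeps Q numerically orthonormal).
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:  # Krylov space exhausted: exact invariant subspace found
            break
        q_prev, q = q, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[: m - 1], 1) + np.diag(betas[: m - 1], -1)
    return Q[:, :m], T

# Laplacian of the 4-cycle C4; its eigenvalues are 0, 2, 2, 4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
Q, T = lanczos(L, np.ones(4) + np.arange(4) * 0.1, 4)
```

Because the starting vector mixes only three independent eigendirections of C4 (eigenvalue 2 has multiplicity two), the iteration breaks down after three steps and the Ritz values of `T` recover the distinct eigenvalues 0, 2, and 4.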

📝 Abstract
Graph Neural Networks (GNNs) excel at handling graph-structured data but often underperform in link prediction tasks compared to classical methods, mainly due to the limitations of the commonly used message-passing principle. Notably, their ability to distinguish non-isomorphic graphs is limited by the 1-dimensional Weisfeiler-Lehman test. Our study presents a novel method to enhance the expressivity of GNNs by embedding induced subgraphs into the graph Laplacian matrix's eigenbasis. We introduce a Learnable Lanczos algorithm with Linear Constraints (LLwLC) and propose two novel subgraph extraction strategies: encoding vertex-deleted subgraphs and applying Neumann eigenvalue constraints. For the former, we demonstrate the ability to distinguish graphs that are indistinguishable by 2-WL while maintaining efficient time complexity. The latter focuses on link representations, enabling differentiation between $k$-regular graphs and between automorphic nodes, a vital aspect for link prediction tasks. Our approach results in an extremely lightweight architecture, reducing the need for extensive training datasets. Empirically, our method improves performance on challenging link prediction tasks across benchmark datasets, establishing its practical utility and supporting our theoretical findings. Notably, LLwLC achieves 20x and 10x speedups while requiring only 5% and 10% of the data from the PubMed and OGBL-Vessel datasets, respectively, compared to the state-of-the-art.
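The first stage of the pipeline (mapping a candidate link to an enclosing induced subgraph and embedding its nodes in the Laplacian eigenbasis) can be sketched as follows. The helper names, the 1-hop neighborhood, and the dense eigensolver are illustrative assumptions for a toy adjacency matrix, not the paper's exact procedure:

```python
import numpy as np

def khop_subgraph(A, u, v, k=1):
    """Return the node indices of the k-hop subgraph enclosing candidate
    link (u, v), together with the induced adjacency submatrix."""
    n = A.shape[0]
    frontier = {u, v}
    nodes = set(frontier)
    for _ in range(k):
        frontier = {j for i in frontier for j in range(n) if A[i, j]} - nodes
        nodes |= frontier
    idx = sorted(nodes)
    return idx, A[np.ix_(idx, idx)]

def spectral_embedding(A_sub, d):
    """Embed subgraph nodes via the first d eigenvectors of the Laplacian
    (np.linalg.eigh returns eigenvalues in ascending order)."""
    L = np.diag(A_sub.sum(1)) - A_sub
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :d]

# Toy example: the 1-hop subgraph around link (0, 1) in a 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
idx, sub = khop_subgraph(A, 0, 1, k=1)
emb = spectral_embedding(sub, d=2)  # one 2-d spectral coordinate per node
```

In the 4-cycle every node is within one hop of the link's endpoints, so the enclosing subgraph is the whole graph and each of its four nodes receives a 2-dimensional spectral coordinate.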
Problem

Research questions and friction points this paper is trying to address.

Enhance GNN expressivity for link prediction
Distinguish non-isomorphic graphs beyond 1-WL
Improve efficiency with lightweight architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable Lanczos algorithm
Vertex-deleted subgraph encoding
Neumann eigenvalue constraints
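To make the vertex-deleted subgraph idea concrete, the sketch below computes, for each vertex of a graph, the Laplacian spectrum of the graph with that vertex removed; the multiset of these spectra is a structural signature finer than the spectrum of the graph alone. This is an unlearned, dense illustration of the underlying idea, not the LLwLC encoder:

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of an adjacency matrix."""
    return np.diag(A.sum(1)) - A

def vertex_deleted_spectra(A):
    """For each vertex i, delete it and return the sorted Laplacian
    spectrum of the remaining graph. The sorted multiset of these spectra
    serves as a structural signature of the original graph."""
    n = A.shape[0]
    spectra = []
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        sub = A[np.ix_(keep, keep)]
        spectra.append(tuple(np.round(np.sort(np.linalg.eigvalsh(laplacian(sub))), 6)))
    return sorted(spectra)

# Example: the 4-cycle C4. Deleting any vertex leaves the path P3.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
spectra = vertex_deleted_spectra(A)
```

For the 4-cycle every vertex deletion yields the path P3, whose Laplacian spectrum is (0, 1, 3), so all four deleted-vertex spectra coincide; graphs with richer structure produce distinguishing multisets.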