Graph Self-Supervised Learning with Learnable Structural and Positional Encodings

📅 2025-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional graph self-supervised learning (GSSL) suffers from the limited expressiveness of GNNs in modeling complex topologies and neglects intermediate structural information in its self-supervision objectives, making it hard to distinguish graphs that are locally similar but globally topologically distinct. Method: We propose GenHopNet, the first framework to jointly embed learnable structural and positional encodings into the entire k-hop message-passing process, enabling topology-aware hierarchical representation learning. We theoretically prove that its expressive power strictly surpasses the Weisfeiler-Lehman (WL) test. We further design a structural- and positional-aware self-supervised paradigm that balances topological sensitivity with robustness. Results: GenHopNet achieves significant improvements over state-of-the-art methods on multiple graph classification benchmarks, including structure-sensitive datasets, while maintaining computational efficiency.

📝 Abstract
Traditional Graph Self-Supervised Learning (GSSL) struggles to capture complex structural properties. This limitation stems from two main factors: (1) the inadequacy of conventional Graph Neural Networks (GNNs) in representing sophisticated topological features, and (2) the focus of self-supervised learning solely on final graph representations. To address these issues, we introduce *GenHopNet*, a GNN framework that integrates a $k$-hop message-passing scheme, enhancing its ability to capture local structural information without explicit substructure extraction. We theoretically demonstrate that *GenHopNet* surpasses the expressiveness of the classical Weisfeiler-Lehman (WL) test for graph isomorphism. Furthermore, we propose a structural- and positional-aware GSSL framework that incorporates topological information throughout the learning process. This approach enables the learning of representations that are both sensitive to graph topology and invariant to specific structural and feature augmentations. Comprehensive experiments on graph classification datasets, including those designed to test structural sensitivity, show that our method consistently outperforms existing approaches while maintaining computational efficiency. Our work significantly advances GSSL's capability to distinguish graphs with similar local structures but different global topologies.
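The abstract only summarizes the $k$-hop message-passing scheme; the paper's exact formulation (and its learnable encodings) is not reproduced here. As a minimal illustrative sketch under that caveat, hop-wise aggregation with a simple hand-crafted structural encoding (hop-wise neighbor counts standing in for the learnable encodings) might look like:

```python
import numpy as np

def k_hop_message_passing(A, X, k=3):
    """Illustrative k-hop aggregation (hypothetical; not GenHopNet's exact scheme).

    A: (n, n) adjacency matrix, X: (n, d) node features.
    For each hop distance 1..k, mean-aggregates features of nodes at exactly
    that distance and appends a simple structural encoding (the count of
    nodes at that hop), then concatenates all hop-wise representations.
    """
    n = A.shape[0]
    reps = [X]                  # hop 0: the node's own features
    reach = np.eye(n)           # nodes reachable within the hops seen so far
    hop = np.eye(n)             # nodes at exactly the current hop distance
    for _ in range(k):
        frontier = (hop @ A > 0).astype(float)   # candidates one hop further
        frontier = frontier * (reach == 0)       # keep exactly-this-hop nodes
        reach = ((reach + frontier) > 0).astype(float)
        hop = frontier
        counts = frontier.sum(axis=1, keepdims=True)
        agg = np.divide(frontier @ X, np.maximum(counts, 1))
        reps.append(np.concatenate([agg, counts], axis=1))
    return np.concatenate(reps, axis=1)          # shape (n, d + k*(d+1))
```

Aggregating each hop separately, rather than mixing all neighbors into one sum, is what lets such schemes see structure a single-hop GNN collapses, which is the intuition behind the paper's expressiveness claim.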
Problem

Research questions and friction points this paper is trying to address.

Enhance Graph Self-Supervised Learning
Capture complex structural properties
Integrate topological information effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates k-hop message-passing scheme
Proposes structural- and positional-aware GSSL framework
Enhances local structural information capture
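The paper's structural- and positional-aware objective is not spelled out in this summary, but frameworks of this kind typically build on a contrastive loss between two augmented views of the same graph. A minimal sketch of such a loss (standard InfoNCE; all names here are illustrative, not the paper's API):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE contrastive loss between two views of a batch of graphs.

    z1, z2: (n, d) graph embeddings; row i of z1 and row i of z2 are
    embeddings of two augmentations of the same graph (the positives).
    Hypothetical stand-in for the paper's co-aware objective.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal
```

Minimizing this pulls the two views of each graph together while pushing apart views of different graphs; the paper's contribution, per the abstract, is making the views and encoder topology-aware so that this invariance does not erase structural distinctions.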