🤖 AI Summary
Existing graph self-supervised learning methods suffer from high computational overhead, reliance on contrastive losses and negative sampling, susceptibility to representation collapse, and difficulty in quantifying the semantic contribution of node embeddings to downstream tasks. To address these issues, we propose a contrastive-free, negative-sampling-free joint embedding predictive framework. Our approach introduces a subgraph-level, single-context multi-target joint prediction mechanism that jointly leverages structural and semantic information; employs a Gaussian Mixture Model (GMM)-driven semantic contribution scoring strategy to generate high-quality pseudo-labels; and incorporates Bayesian inference for robust self-training. Evaluated on multiple benchmark datasets, the framework achieves significant improvements in node classification and link prediction performance, while exhibiting higher training efficiency and effectively mitigating representation collapse.
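The GMM-driven pseudo-labeling step described above can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the paper's actual implementation; the variable names, the number of mixture components, and the confidence threshold are all illustrative assumptions.

```python
# Hedged sketch: GMM-based pseudo-label scoring over node embeddings.
# Assumes embeddings already produced by an encoder; names are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy node embeddings: two well-separated latent clusters in 8-D space.
embeddings = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(50, 8)),
    rng.normal(loc=+2.0, scale=0.5, size=(50, 8)),
])

# Fit a GMM over the embedding space; each component plays the role of a
# latent semantic class in the absence of ground-truth labels.
gmm = GaussianMixture(n_components=2, random_state=0).fit(embeddings)

# Pseudo-label = most likely component; the posterior probability serves as a
# per-node confidence score for that assignment.
pseudo_labels = gmm.predict(embeddings)
confidence = gmm.predict_proba(embeddings).max(axis=1)

# Retain only high-confidence nodes for the self-training objective.
keep = confidence > 0.95
print(keep.sum(), "of", len(embeddings), "nodes retained for self-training")
```

In practice the retained pseudo-labels would feed a semantic-aware loss term alongside the joint embedding prediction objective; the 0.95 threshold here is an arbitrary choice for illustration.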
📝 Abstract
Graph representation learning has emerged as a cornerstone for tasks like node classification and link prediction, yet prevailing self-supervised learning (SSL) methods face challenges such as computational inefficiency, reliance on contrastive objectives, and representation collapse. Existing approaches often depend on feature reconstruction, negative sampling, or complex decoders, which introduce training overhead and hinder generalization. Moreover, current techniques that address these limitations fail to account for the contribution of node embeddings to a given prediction in the absence of labeled nodes. To address these limitations, we propose a novel joint embedding predictive framework for graph SSL that eliminates contrastive objectives and negative sampling while preserving semantic and structural information. Additionally, we introduce a semantic-aware objective term that incorporates pseudo-labels derived from Gaussian Mixture Models (GMMs), enhancing node discriminability by evaluating latent feature contributions. Extensive experiments demonstrate that our framework outperforms state-of-the-art graph SSL methods across benchmarks, achieving superior performance without contrastive loss or complex decoders. Key innovations include (1) a non-contrastive, view-invariant joint embedding predictive architecture, (2) a single-context, multi-target prediction scheme over subgraphs, and (3) GMM-based pseudo-label scoring to capture semantic contributions. This work advances graph SSL by offering a computationally efficient, collapse-resistant paradigm that bridges spatial and semantic graph features for downstream tasks. The code for our paper can be found at https://github.com/Deceptrax123/JPEB-GSSL
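The single-context, multi-target joint embedding prediction idea can be illustrated with a deliberately simplified NumPy sketch. Linear maps stand in for the GNN encoders, the EMA update and MSE-in-embedding-space loss follow the general non-contrastive (JEPA/BYOL-style) recipe rather than the paper's exact architecture, and every name below is an assumption for illustration.

```python
# Hedged sketch: predict multiple target-subgraph embeddings from one
# context-subgraph embedding, with no negatives and no contrastive loss.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_emb, n_targets = 16, 8, 3

W_ctx = rng.normal(size=(d_in, d_emb)) * 0.1   # context (online) encoder
W_tgt = W_ctx.copy()                            # target encoder (EMA copy)
W_pred = np.eye(d_emb)                          # predictor head (identity here)

def ema_update(w_tgt, w_ctx, tau=0.99):
    # The target encoder slowly tracks the context encoder; the asymmetry
    # between the two branches is what resists representation collapse
    # without negative samples.
    return tau * w_tgt + (1 - tau) * w_ctx

context = rng.normal(size=(1, d_in))            # pooled context-subgraph features
targets = rng.normal(size=(n_targets, d_in))    # pooled target-subgraph features

z_ctx = context @ W_ctx                         # encode the single context
z_pred = z_ctx @ W_pred                         # predict in embedding space
z_tgt = targets @ W_tgt                         # targets: no gradient in practice

# One context embedding is regressed onto every target embedding (broadcast),
# giving a plain MSE objective in latent space.
loss = np.mean((z_pred - z_tgt) ** 2)
W_tgt = ema_update(W_tgt, W_ctx)
print(f"joint-prediction loss: {loss:.4f}")
```

In a real training loop the encoders would be GNNs, gradients would flow only through the context branch, and the loss would be accumulated over many sampled context/target subgraph pairs per graph.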