🤖 AI Summary
Graph self-supervised learning (GSSL) often yields loosely structured and insufficiently discriminative graph embeddings. Method: This paper proposes Graph Interplay (GIP), the first framework to introduce random intra-batch cross-graph edge connections into self-supervised graph learning, enabling direct graph-level message passing. The authors theoretically show that this mechanism promotes manifold separation, yielding more structured and discriminative embedding spaces. GIP operates as a plug-and-play module, integrating with mainstream GSSL methods (e.g., GRACE, GCA) without modifying backbone architectures. Contribution/Results: Extensive experiments on multiple benchmark datasets demonstrate consistent performance gains: GIP improves downstream classification accuracy by 3.2–7.8% on average across diverse backbones. The gains remain stable across model variants, validating both the effectiveness and generalizability of the cross-graph interaction paradigm.
📝 Abstract
Graph self-supervised learning (GSSL) has emerged as a compelling framework for extracting informative representations from graph-structured data without extensive reliance on labeled inputs. In this study, we introduce Graph Interplay (GIP), an innovative and versatile approach that significantly enhances the performance of various existing GSSL methods. To this end, GIP advocates direct graph-level communication by introducing random inter-graph edges within standard batches. Despite GIP's simplicity, we further show theoretically that GIP essentially performs a principled manifold separation by combining inter-graph message passing with GSSL, producing more structured embedding manifolds and thus benefiting a series of downstream tasks. Our empirical study demonstrates that GIP surpasses the performance of prevailing GSSL methods across multiple benchmarks by significant margins, highlighting its potential as a breakthrough approach. Moreover, GIP can be readily integrated into a series of GSSL methods and consistently offers additional performance gains. This advancement not only amplifies the capability of GSSL but also potentially sets the stage for a novel graph learning paradigm in a broader sense.
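The core batching mechanism described above — sampling random edges that connect nodes belonging to *different* graphs in the same batch — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `add_intergraph_edges`, the global node-offset indexing, and the rejection-sampling loop are all assumptions made for the example.

```python
import random

def add_intergraph_edges(graph_sizes, num_edges, seed=0):
    """Sketch of GIP-style batching: sample random edges between nodes of
    *different* graphs in a batch. Nodes are indexed globally, with graph g
    occupying the index range [offsets[g], offsets[g] + graph_sizes[g]).

    graph_sizes: list of node counts per graph in the batch.
    num_edges:   number of random inter-graph edges to add.
    Returns a list of (u, v) node-index pairs with u and v in different graphs.
    """
    rng = random.Random(seed)
    # Cumulative node offsets: offsets[g] is the first global index of graph g.
    offsets = [0]
    for n in graph_sizes:
        offsets.append(offsets[-1] + n)
    total_nodes = offsets[-1]

    def graph_of(node):
        # Locate which graph a global node index belongs to.
        for g in range(len(graph_sizes)):
            if offsets[g] <= node < offsets[g + 1]:
                return g

    edges = []
    while len(edges) < num_edges:
        u = rng.randrange(total_nodes)
        v = rng.randrange(total_nodes)
        if graph_of(u) != graph_of(v):  # keep only cross-graph pairs
            edges.append((u, v))
    return edges
```

With these extra edges appended to the batch's edge list, an unmodified GNN backbone performs message passing across graph boundaries, which is what enables the direct graph-level communication the abstract refers to.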