🤖 AI Summary
Existing graph contrastive learning (GCL) methods primarily focus on implicit semantic modeling, neglecting the explicit structural commonsense embedded in graph topology and node/edge attributes, which limits representation capacity. To address this, we propose Str-GCL, the first GCL framework to explicitly incorporate structural commonsense. It formalizes topological and attribute-level commonsense as first-order logic rules and introduces a representation alignment mechanism that guides graph neural network encoders to capture this structural knowledge, establishing a logic-rule-driven self-supervised learning paradigm for graphs. Extensive experiments on multiple benchmark datasets demonstrate that Str-GCL consistently and significantly outperforms state-of-the-art GCL methods, validating that integrating structural commonsense substantially enhances representation quality, generalization, and interpretability.
📝 Abstract
Graph Contrastive Learning (GCL) is a widely adopted approach in self-supervised graph representation learning, applying contrastive objectives to produce effective representations. However, current GCL methods primarily focus on capturing implicit semantic relationships, often overlooking the structural commonsense embedded within the graph's structure and attributes, which contains underlying knowledge crucial for effective representation learning. Due to the lack of explicit information and clear guidance in general graphs, identifying and integrating such structural commonsense into GCL poses a significant challenge. To address this gap, we propose a novel framework called Structural Commonsense Unveiling in Graph Contrastive Learning (Str-GCL). Str-GCL leverages first-order logic rules to represent structural commonsense and explicitly integrates these rules into the GCL framework. It introduces topological and attribute-based rules without altering the original graph and employs a representation alignment mechanism to guide the encoder in effectively capturing this commonsense. To the best of our knowledge, this is the first attempt to directly incorporate structural commonsense into GCL. Extensive experiments demonstrate that Str-GCL outperforms existing GCL methods, providing a new perspective on leveraging structural commonsense in graph representation learning.
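To illustrate the general shape of such an objective, the following is a minimal NumPy sketch, not the paper's actual implementation: a standard InfoNCE contrastive loss over two augmented views, plus a hypothetical alignment term (here a simple mean-squared error) that pulls encoder representations toward rule-derived representations. The function names (`info_nce`, `alignment_loss`, `str_gcl_objective`), the choice of MSE for alignment, and the weighting `lam` are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE contrastive loss between two views of the same nodes.
    Positive pairs sit on the diagonal; all other pairs act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                     # pairwise cosine similarities
    log_prob = sim.diagonal() - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

def alignment_loss(z, z_rule):
    """Hypothetical alignment term: MSE between encoder representations
    and representations derived from first-order logic rules."""
    return ((z - z_rule) ** 2).mean()

def str_gcl_objective(z1, z2, z_rule, lam=0.1):
    """Illustrative combined objective: contrastive term + rule alignment,
    weighted by an assumed hyperparameter lam."""
    return info_nce(z1, z2) + lam * alignment_loss(z1, z_rule)
```

The key point the sketch conveys is that the rules never modify the graph itself; they only supply an auxiliary target (`z_rule`) that the encoder's representations are nudged toward during training.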