Less is More: Towards Simple Graph Contrastive Learning

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited performance of graph contrastive learning (GCL) on heterophilic graphs, this paper proposes a lightweight, augmentation-free, and negative-sample-free dual-encoder contrastive framework. Methodologically, it employs a GCN encoder to capture structure-aware representations and an MLP encoder to suppress feature noise, naturally modeling raw node features and graph topology as complementary views; structural guidance is incorporated into the contrastive loss for end-to-end unsupervised training. Theoretical analysis shows that this design effectively mitigates the feature confusion induced by heterophilic neighborhoods. Experiments demonstrate state-of-the-art performance on mainstream heterophilic graph benchmarks, with significantly lower computational and memory overhead than existing methods. On homophilic graphs, the framework additionally offers better scalability, controllable complexity, and enhanced adversarial robustness.

📝 Abstract
Graph Contrastive Learning (GCL) has shown strong promise for unsupervised graph representation learning, yet its effectiveness on heterophilic graphs, where connected nodes often belong to different classes, remains limited. Most existing methods rely on complex augmentation schemes, intricate encoders, or negative sampling, which raises the question of whether such complexity is truly necessary in this challenging setting. In this work, we revisit the foundations of supervised and unsupervised learning on graphs and uncover a simple yet effective principle for GCL: mitigating node feature noise by aggregating it with structural features derived from the graph topology. This observation suggests that the original node features and the graph structure naturally provide two complementary views for contrastive learning. Building on this insight, we propose an embarrassingly simple GCL model that uses a GCN encoder to capture structural features and an MLP encoder to isolate node feature noise. Our design requires neither data augmentation nor negative sampling, yet achieves state-of-the-art results on heterophilic benchmarks with minimal computational and memory overhead, while also offering advantages in homophilic graphs in terms of complexity, scalability, and robustness. We provide theoretical justification for our approach and validate its effectiveness through extensive experiments, including robustness evaluations against both black-box and white-box adversarial attacks.
Problem

Research questions and friction points this paper is trying to address.

Improving graph contrastive learning on heterophilic graphs
Simplifying GCL by eliminating complex augmentations and negative sampling
Mitigating node feature noise using complementary graph structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses GCN encoder for structural features
Employs MLP encoder to isolate noise
Requires no data augmentation or negative sampling
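The dual-encoder design in the bullets above can be sketched compactly. The following is an illustrative NumPy sketch, not the authors' code: the function names, the tanh nonlinearity, and the plain cosine-alignment objective are assumptions (per the summary, the paper's actual loss also incorporates structural guidance), but it shows the core idea of contrasting a GCN view against an MLP view with no augmentations and no negative samples.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_view(A_norm, X, W):
    # Structure-aware view: a one-layer GCN mixes each node's
    # features with its neighbors' via the normalized adjacency.
    return np.tanh(A_norm @ X @ W)

def mlp_view(X, W):
    # Feature-only view: an MLP that ignores the topology,
    # isolating the raw (possibly noisy) node features.
    return np.tanh(X @ W)

def alignment_loss(Z1, Z2):
    # Negative-free objective: align the two views of each node
    # by maximizing per-node cosine similarity (loss is its negation).
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    return -np.mean(np.sum(Z1 * Z2, axis=1))
```

Because the two "views" are simply the graph structure and the raw features, no perturbation-based augmentation is needed, and because only positive pairs (the two views of the same node) enter the loss, no negative sampling is needed either.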
Yanan Zhao
NTU - Nanyang Technological University
signal and information processing, graph generation, diffusion model

Feng Ji
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Jingyang Dai
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Jiaze Ma
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Wee Peng Tay
Nanyang Technological University
information processing, graph signal processing, graph neural networks, robust machine learning