Graph Contrastive Learning versus Untrained Baselines: The Role of Dataset Size

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
The true effectiveness of graph contrastive learning (GCL) relative to untrained baselines—such as randomly initialized GNNs, MLPs, or handcrafted features—remains unclear, particularly under varying data scales and task difficulties.
Method: We conduct large-scale ablation studies across standard benchmarks (e.g., ogbg-molhiv) and synthetic graph datasets, systematically varying dataset size and task complexity.
Contribution/Results: We find that GCL’s performance advantage emerges only beyond several thousand samples; gains are marginal or absent in small-sample regimes. Performance scales logarithmically with data volume but exhibits a pronounced saturation ceiling. This work identifies data scale as a critical latent variable governing GCL benchmark performance—previously unaccounted for in standard evaluations. Our findings challenge prevailing assessment paradigms and advocate for explicit control and reporting of dataset size in graph representation learning benchmarks. These results provide empirical grounding for the realistic positioning of GCL and inform principled design of future self-supervised graph learning methods.
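The reported trend—accuracy growing roughly linearly in the logarithm of dataset size—can be checked with a simple least-squares fit. A minimal numpy sketch; the `(n, accuracy)` pairs below are synthetic placeholders, not the paper's measurements:

```python
import numpy as np

# Illustrative check of the reported trend: accuracy ~ a + b * log(n).
# These (n, accuracy) pairs are synthetic, chosen only to show the fit.
n = np.array([500, 1000, 2000, 4000, 8000, 16000, 32000], dtype=float)
acc = np.array([0.62, 0.65, 0.68, 0.71, 0.74, 0.77, 0.80])

# Fit acc against log(n); b is the gain per e-fold increase in data.
b, a = np.polyfit(np.log(n), acc, 1)
pred = a + b * np.log(n)
print(b > 0, float(np.abs(pred - acc).max()))
```

A positive slope `b` with small residuals is consistent with logarithmic scaling; a saturation ceiling would show up as systematic negative residuals at the largest `n`.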

📝 Abstract
Graph Contrastive Learning (GCL) has emerged as a leading paradigm for self-supervised learning on graphs, with strong performance reported on standardized datasets and growing applications ranging from genomics to drug discovery. We ask a basic question: does GCL actually outperform untrained baselines? We find that GCL's advantage depends strongly on dataset size and task difficulty. On standard datasets, untrained Graph Neural Networks (GNNs), simple multilayer perceptrons, and even handcrafted statistics can rival or exceed GCL. On the large molecular dataset ogbg-molhiv, we observe a crossover: GCL lags at small scales but pulls ahead beyond a few thousand graphs, though this gain eventually plateaus. On synthetic datasets, GCL accuracy scales approximately with the logarithm of the number of graphs, and the performance gap relative to untrained GNNs varies with task complexity. Moving forward, it is crucial to account for the role of dataset size in benchmarks and applications, and to design GCL algorithms that avoid performance plateaus.
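The untrained baselines the abstract refers to are simply GNNs with randomly initialized, frozen weights whose pooled embeddings are fed to a downstream probe. A minimal numpy sketch of such a baseline, assuming standard GCN propagation with symmetric normalization; the toy graph, feature dimensions, and layer widths are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def untrained_gcn_embed(adj, feats, dims=(16, 16)):
    """One forward pass of a GCN with random, untrained weights.

    adj:   (n, n) adjacency matrix of one graph.
    feats: (n, d) node feature matrix.
    Returns a graph-level embedding via mean pooling over nodes.
    """
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    a_hat = adj + np.eye(len(adj))
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    h = feats
    for dim in dims:
        # Random Gaussian weights, never trained (scaled for stability).
        w = rng.standard_normal((h.shape[1], dim)) / np.sqrt(h.shape[1])
        h = np.maximum(a_norm @ h @ w, 0.0)  # propagate + ReLU
    return h.mean(axis=0)  # mean pooling -> graph embedding

# Toy 4-node path graph with 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.standard_normal((4, 3))
emb = untrained_gcn_embed(adj, feats)
print(emb.shape)  # (16,)
```

In a baseline evaluation of this kind, embeddings like `emb` would be computed for every graph and passed to a linear probe; the paper's finding is that this pipeline can rival trained GCL on small datasets.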
Problem

Research questions and friction points this paper is trying to address.

Evaluating GCL performance against untrained baselines across datasets
Assessing dataset size impact on GCL versus untrained methods
Investigating GCL performance scaling with task complexity and size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates GCL against untrained baselines
Analyzes performance scaling with dataset size
Identifies performance plateau in large datasets