Fractal Graph Contrastive Learning

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph contrastive learning (GCL) suffers from global structural inconsistency in positive sample pairs, as existing graph augmentation methods lack explicit modeling of topological consistency. To address this, we propose a fractal self-similarity–driven contrastive learning framework. First, we introduce a fractal renormalisation augmentation, based on box covering, that explicitly preserves multi-scale topological self-similarity. Second, we design a fractal-dimension-aware contrastive loss, prove that the fractal-dimension discrepancy between the original and renormalised graphs converges weakly to a centred Gaussian, and use this result to derive a one-shot estimator that reduces computational overhead. By unifying fractal geometry, graph renormalisation, and contrastive learning, the method achieves state-of-the-art performance on standard graph benchmarks; on traffic network datasets it improves average accuracy by about 7% and cuts training time by roughly 61%.

📝 Abstract
While Graph Contrastive Learning (GCL) has attracted considerable attention in the field of graph self-supervised learning, its performance heavily relies on data augmentations that are expected to generate semantically consistent positive pairs. Existing strategies typically resort to random perturbations or local structure preservation, yet lack explicit control over global structural consistency between augmented views. To address this limitation, we propose Fractal Graph Contrastive Learning (FractalGCL), a theory-driven framework that leverages fractal self-similarity to enforce global topological coherence. FractalGCL introduces two key innovations: a renormalisation-based augmentation that generates structurally aligned positive views via box coverings; and a fractal-dimension-aware contrastive loss that aligns graph embeddings according to their fractal dimensions. While combining the two innovations markedly boosts graph-representation quality, it also adds non-trivial computational overhead. To mitigate the computational overhead of fractal dimension estimation, we derive a one-shot estimator by proving that the dimension discrepancy between original and renormalised graphs converges weakly to a centred Gaussian distribution. This theoretical insight enables a reduction in dimension computation cost by an order of magnitude, cutting overall training time by approximately 61%. The experiments show that FractalGCL not only delivers state-of-the-art results on standard benchmarks but also outperforms traditional baselines on traffic networks by a remarkable average margin of about 7%. Code is available at https://anonymous.4open.science/r/FractalGCL-0511.
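The box coverings and fractal dimensions underpinning the method can be illustrated with standard box-counting. The sketch below is not the paper's implementation: it uses the common random-sequential-burning approximation (cover the graph with balls of radius `l_B`, count them, and fit the slope of log N against log l_B), with `networkx` and `numpy` assumed as tooling.

```python
import random
import networkx as nx
import numpy as np

def box_covering(G, l_B, seed=0):
    """Count boxes needed to cover G with balls of radius l_B,
    using random sequential burning (a greedy approximation)."""
    rng = random.Random(seed)
    uncovered = set(G.nodes)
    n_boxes = 0
    while uncovered:
        # Pick a random uncovered node as the box center.
        center = rng.choice(sorted(uncovered))
        # All nodes within shortest-path distance l_B form one box.
        ball = nx.single_source_shortest_path_length(G, center, cutoff=l_B)
        uncovered.difference_update(ball)
        n_boxes += 1
    return n_boxes

def box_dimension(G, box_sizes):
    """Estimate the box-counting dimension d_B from the scaling
    N(l_B) ~ l_B^(-d_B), i.e. minus the log-log slope."""
    counts = [box_covering(G, l) for l in box_sizes]
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# On a long path graph the estimate should land near 1,
# the dimension of a one-dimensional lattice.
d = box_dimension(nx.path_graph(128), [1, 2, 4, 8])
```

The greedy covering is only an approximation to the (NP-hard) minimal box covering, which is why repeated estimation during training is costly and why a one-shot estimator, as the paper derives, matters.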
Problem

Research questions and friction points this paper is trying to address.

Enhance global structural consistency in graph contrastive learning
Reduce computational overhead of fractal dimension estimation
Improve graph-representation quality with fractal self-similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages fractal self-similarity for global coherence
Renormalisation-based augmentation aligns structural views
One-shot estimator reduces computational overhead significantly
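The abstract does not give the dimension-aware loss in closed form. As a hedged reading, one plausible sketch is an NT-Xent-style objective whose positive-pair terms are weighted by how well the two views' estimated fractal dimensions agree; the Gaussian weight and the `lam` parameter below are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def dimension_aware_nt_xent(z1, z2, d1, d2, tau=0.5, lam=1.0):
    """Sketch of a fractal-dimension-aware contrastive loss.

    z1, z2 : (n, k) embeddings of the two augmented views.
    d1, d2 : (n,) estimated fractal dimensions of each view.
    Positive pairs whose dimensions disagree are down-weighted
    (assumed form; the paper's exact loss may differ).
    """
    # Cosine similarities between all cross-view pairs.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (n, n)
    pos = np.diag(sim)                          # matched views
    log_denom = np.log(np.exp(sim).sum(axis=1))
    # Dimension-alignment weight: 1 when d1 == d2, smaller otherwise.
    weights = np.exp(-lam * (d1 - d2) ** 2)
    return float(np.mean(weights * (log_denom - pos)))
```

Since each per-pair term `log_denom - pos` is non-negative (the denominator contains the positive term itself), the loss is non-negative, and perfectly dimension-aligned views recover the plain NT-Xent objective.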