Adaptive Multi-view Graph Contrastive Learning via Fractional-order Neural Diffusion Networks

📅 2025-11-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing graph contrastive learning methods rely on handcrafted, fixed views (e.g., local/global), limiting their ability to adaptively capture multi-scale structural patterns. To address this, we propose an adaptive multi-view graph contrastive learning framework based on fractional-order neural diffusion networks. Our method employs a learnable fractional-order derivative α ∈ (0,1] to continuously modulate the scope of information propagation, enabling end-to-end generation of diverse node representations spanning local to global scales. By parameterizing diffusion scales as a continuous dynamical process, it eliminates dependence on discrete view design and manual data augmentation. Integrating fractional calculus, continuous-time graph neural networks, and contrastive learning, our approach significantly enhances representation discriminability and robustness. Extensive experiments demonstrate consistent and substantial improvements over state-of-the-art graph contrastive learning methods across standard benchmarks.
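The propagation mechanism described in the summary can be illustrated with a Grünwald–Letnikov discretization of a fractional-order diffusion equation $d^\alpha x / dt^\alpha = -Lx$ on a small graph. This is a minimal sketch under assumed details (path graph, step size, function names), not the paper's actual implementation; setting $\alpha = 1$ recovers an ordinary explicit-Euler diffusion step.

```python
import numpy as np

def gl_coefficients(alpha, n):
    """Grunwald-Letnikov coefficients g_k = (-1)^k * C(alpha, k),
    via the recurrence g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def fractional_diffusion(L, x0, alpha, h=0.01, steps=50):
    """Integrate d^alpha x / dt^alpha = -L x with an explicit
    Grunwald-Letnikov scheme; alpha = 1 reduces to plain Euler diffusion."""
    g = gl_coefficients(alpha, steps)
    xs = [x0]
    for n in range(1, steps + 1):
        # memory term: sum_{k=1}^{n} g_k * x_{n-k}
        memory = sum(g[k] * xs[n - k] for k in range(1, n + 1))
        x_n = (h ** alpha) * (-L @ xs[n - 1]) - memory
        xs.append(x_n)
    return xs[-1]

# Path graph on 5 nodes: combinatorial Laplacian L = D - A
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

x0 = np.zeros(5)
x0[0] = 1.0  # unit mass on the first node

z_small = fractional_diffusion(L, x0, alpha=0.3)  # one candidate view
z_large = fractional_diffusion(L, x0, alpha=1.0)  # another candidate view
```

In the proposed framework the order $\alpha$ would additionally be a learnable parameter updated by gradient descent; here it is fixed per view only for clarity.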

📝 Abstract
Graph contrastive learning (GCL) learns node and graph representations by contrasting multiple views of the same graph. Existing methods typically rely on fixed, handcrafted views (usually a local and a global perspective), which limits their ability to capture multi-scale structural patterns. We present an augmentation-free, multi-view GCL framework grounded in fractional-order continuous dynamics. By varying the fractional derivative order $\alpha \in (0,1]$, our encoders produce a continuous spectrum of views: small $\alpha$ yields localized features, while large $\alpha$ induces broader, global aggregation. We treat $\alpha$ as a learnable parameter so the model can adapt diffusion scales to the data and automatically discover informative views. This principled approach generates diverse, complementary representations without manual augmentations. Extensive experiments on standard benchmarks demonstrate that our method produces more robust and expressive embeddings and outperforms state-of-the-art GCL baselines.
Problem

Research questions and friction points this paper is trying to address.

How to adaptively capture multi-scale structural patterns in graphs
How to learn optimal diffusion scales without manual view augmentation
How to generate robust embeddings through fractional-order neural diffusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fractional-order neural diffusion generates multi-scale graph views
Learnable fractional derivative adapts diffusion scales automatically
Augmentation-free contrastive learning creates diverse graph representations
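The augmentation-free contrastive objective named in these bullets pairs node embeddings produced at different diffusion scales. A minimal InfoNCE-style loss over two such views might look like the following sketch; the temperature, dimensions, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """InfoNCE loss: node i in view z1 is a positive pair with
    node i in view z2; all other nodes act as negatives."""
    # L2-normalize rows so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau  # (n, n) cross-view similarity matrix
    # cross-entropy with the diagonal (matching nodes) as the positive class
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))  # stand-in for a small-alpha view's embeddings
z_b = rng.normal(size=(8, 16))  # stand-in for a large-alpha view's embeddings
loss = info_nce(z_a, z_b)
```

As a sanity check, the loss is lowest when the two views agree perfectly on every node, which is what drives matched nodes across diffusion scales toward similar embeddings.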
🔎 Similar Papers
2023-06-08 · International Conference on Machine Learning · Citations: 43
Authors:
- Yanan Zhao (Nanyang Technological University): signal and information processing, graph generation, diffusion models
- Feng Ji (School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore)
- Jingyang Dai (School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore)
- Jiaze Ma (School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore)
- Keyue Jiang (University College London): diffusion/flow models, geometric generative models, statistical machine learning
- Kai Zhao (School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore)
- Wee Peng Tay (Nanyang Technological University): information processing, graph signal processing, graph neural networks, robust machine learning