Tensor-Fused Multi-View Graph Contrastive Learning

📅 2024-10-20
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address insufficient exploitation of topological features and the high computational overhead of graph contrastive learning (GCL), this paper proposes TensorMV-GCL (Tensor-Fused Multi-View Graph Contrastive Learning), a tensorized contrastive learning framework. TensorMV-GCL integrates extended persistent homology (EPH) with multi-view graph augmentation, unifying graph structure and multi-scale topological features via a tensor aggregation-and-compression mechanism. It further improves robustness by injecting noise into the EPH computation, and reduces computational complexity by decoupling tensor aggregation from transformation. Evaluated on 11 standard graph classification benchmarks, TensorMV-GCL outperforms 15 state-of-the-art methods on 9 datasets while remaining comparable on the other two. It achieves strong performance across diverse domains, including molecular property prediction, bioinformatics, and social network analysis, demonstrating that it jointly improves topological representation capability and computational efficiency.

📝 Abstract
Graph contrastive learning (GCL) has emerged as a promising approach to enhance graph neural networks' (GNNs) ability to learn rich representations from unlabeled graph-structured data. However, current GCL models face challenges with computational demands and limited feature utilization, often relying only on basic graph properties like node degrees and edge attributes. This constrains their capacity to fully capture the complex topological characteristics of real-world phenomena represented by graphs. To address these limitations, we propose Tensor-Fused Multi-View Graph Contrastive Learning (TensorMV-GCL), a novel framework that integrates extended persistent homology (EPH) with GCL representations and facilitates multi-scale feature extraction. Our approach uniquely employs tensor aggregation and compression to fuse information from graph and topological features obtained from multiple augmented views of the same graph. By incorporating tensor concatenation and contraction modules, we reduce computational overhead by separating feature tensor aggregation and transformation. Furthermore, we enhance the quality of learned topological features and model robustness through noise-injected EPH. Experiments on molecular, bioinformatic, and social network datasets demonstrate TensorMV-GCL's superiority, outperforming 15 state-of-the-art methods in graph classification tasks across 9 out of 11 benchmarks while achieving comparable results on the remaining two. The code for this paper is publicly available at https://github.com/CS-SAIL/Tensor-MV-GCL.git.
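The tensor concatenation and contraction idea described above can be sketched minimally: per-view embeddings are stacked into a higher-order tensor (aggregation), then a single mode product compresses the feature dimension (transformation). All shapes and variable names below are hypothetical placeholders, not the paper's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: V augmented views, N graphs (or nodes), D input
# feature dims, D_out compressed dims. In the paper's setting, each view
# would contribute both GNN features and EPH-derived topological features.
V, N, D, D_out = 3, 8, 16, 4
view_embeddings = [rng.standard_normal((N, D)) for _ in range(V)]

# Tensor concatenation: stack the views along a new mode -> (V, N, D).
T = np.stack(view_embeddings, axis=0)

# Tensor contraction: compress the feature mode with a matrix W, i.e. a
# mode product over the last axis. Keeping aggregation (the stack) and
# transformation (the contraction) as separate steps mirrors the paper's
# decoupling, which avoids transforming each view separately.
W = rng.standard_normal((D, D_out))
compressed = np.einsum('vnd,do->vno', T, W)

print(T.shape, compressed.shape)  # (3, 8, 16) (3, 8, 4)
```

One learnable `W` shared across views is the design point: the per-view transformations collapse into a single contraction over the stacked tensor.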
Problem

Research questions and friction points this paper is trying to address.

GNNs need richer representations learned from unlabeled graph-structured data.
Current GCL models impose high computational demands and rely on basic properties such as node degrees and edge attributes.
This limits their capacity to capture the complex topological characteristics of real-world graphs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates extended persistent homology with GCL
Uses tensor aggregation for multi-scale features
Reduces computational overhead via tensor modules
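The noise-injected EPH idea can be illustrated with a toy persistence diagram: perturb the (birth, death) coordinates with small Gaussian noise while keeping the diagram valid. The diagram values and noise scale below are illustrative assumptions; the paper's actual EPH computation (which a TDA library would normally supply) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical extended persistence diagram: each row is a (birth, death)
# pair such as an EPH computation would emit for one topological feature.
diagram = np.array([[0.0, 0.9],
                    [0.1, 0.5],
                    [0.3, 0.7]])

def noise_injected_diagram(diagram, sigma=0.01, rng=rng):
    """Perturb birth/death coordinates with Gaussian noise, then enforce
    death >= birth so the result remains a valid persistence diagram."""
    noisy = diagram + rng.normal(0.0, sigma, size=diagram.shape)
    noisy[:, 1] = np.maximum(noisy[:, 1], noisy[:, 0])
    return noisy

noisy = noise_injected_diagram(diagram)
print(noisy.shape)  # (3, 2)
```

Training on such perturbed diagrams acts as a regularizer: the learned topological features cannot latch onto exact birth/death values, which is the robustness benefit the summary attributes to noise injection.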