Graph Contrastive Learning via Spectral Graph Alignment

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In graph contrastive learning, structural mismatches arise from the lack of global consistency among view-specific embeddings, leading to ill-defined graph-of-graphs structures. To address this, we propose SpecMatch-CL, a novel method that aligns view-specific graph structures via normalized Laplacian matrix alignment. SpecMatch-CL is the first to integrate spectral graph theory into contrastive learning, introducing a differentiable loss function grounded in spectral matching. We theoretically derive an upper bound showing that the loss jointly enforces structural alignment and embedding uniformity. The method introduces no additional parameters and integrates seamlessly with multi-view GNN architectures. Extensive experiments show state-of-the-art performance on eight TU datasets under both unsupervised and low-label-rate semi-supervised settings. Moreover, SpecMatch-CL achieves significant gains on large-scale transfer tasks (PPI-306K and ZINC-2M), validating the critical role of global structural alignment in graph representation learning.

📝 Abstract
Given augmented views of each input graph, contrastive learning methods (e.g., InfoNCE) optimize pairwise alignment of graph embeddings across views while providing no mechanism to control the global structure of the view-specific graph-of-graphs built from these embeddings. We introduce SpecMatch-CL, a novel loss function that aligns the view-specific graph-of-graphs by minimizing the difference between their normalized Laplacians. Theoretically, we show that under certain assumptions, the difference between normalized Laplacians upper-bounds not only the gap between the ideal perfect-alignment contrastive loss and the current loss, but also the uniformity loss. Empirically, SpecMatch-CL establishes a new state of the art on eight TU benchmarks under unsupervised learning and semi-supervised learning at low label rates, and yields consistent gains in transfer learning on the PPI-306K and ZINC-2M datasets.
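The core objective described above can be sketched in a few lines: build a graph-of-graphs over the batch embeddings for each view, compute each graph's normalized Laplacian, and penalize the Frobenius distance between the two Laplacians. This is a minimal numpy illustration, not the paper's implementation: the cosine-similarity adjacency and the exponential weighting are assumptions, and a real training loop would use a differentiable framework (e.g., PyTorch) rather than numpy.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a nonnegative adjacency A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def graph_of_graphs(Z):
    """Weighted graph over a batch of graph embeddings Z (n x d).
    Edge weights from exponentiated cosine similarity -- an assumed
    construction, not necessarily the one used in the paper."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    A = np.exp(Zn @ Zn.T)      # nonnegative weights
    np.fill_diagonal(A, 0.0)   # no self-loops
    return A

def specmatch_loss(Z1, Z2):
    """Spectral-alignment term: Frobenius distance between the
    normalized Laplacians of the two view-specific graph-of-graphs."""
    L1 = normalized_laplacian(graph_of_graphs(Z1))
    L2 = normalized_laplacian(graph_of_graphs(Z2))
    return np.linalg.norm(L1 - L2, ord="fro")
```

As a sanity check, identical view embeddings produce identical graph-of-graphs and hence zero loss, while differing embeddings yield a positive penalty that a training loop would add to the base contrastive objective.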
Problem

Research questions and friction points this paper is trying to address.

Pairwise contrastive objectives align graph embeddings across augmented views but leave global structure uncontrolled
No mechanism to control the global structure of the view-specific graph-of-graphs
Improving unsupervised and semi-supervised graph learning at low label rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Minimizes normalized Laplacian differences between graph views
Introduces SpecMatch-CL loss for spectral graph alignment
Improves unsupervised and semi-supervised graph learning benchmarks