AS-GCL: Asymmetric Spectral Augmentation on Graph Contrastive Learning

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph contrastive learning (GCL) methods rely on symmetric random augmentations, neglecting spectral-domain structural invariance, which compromises representation robustness and generalization. To address this, we propose the first spectral-domain asymmetric augmentation framework. First, we design a spectrum-aware perturbation suppression mechanism that generates structurally preserved heterogeneous views in the Laplacian spectral domain. Second, we establish an asymmetric view generation paradigm integrating a shared encoder with distinct diffusion operators. Third, we introduce an upper-bound contrastive loss that jointly enforces intra-class compactness and inter-class separability. Evaluated on node classification across eight benchmark datasets, our method achieves average accuracy improvements of 1.8–3.2% over state-of-the-art GCL approaches, demonstrating the effectiveness of spectral-domain asymmetric augmentation in enhancing representation robustness and generalization performance.

📝 Abstract
Graph Contrastive Learning (GCL) has emerged as the foremost approach for self-supervised learning on graph-structured data. GCL reduces reliance on labeled data by learning robust representations from various augmented views. However, existing GCL methods typically depend on consistent stochastic augmentations, which overlook the augmentations' impact on the graph's intrinsic spectral structure, thereby limiting the model's ability to generalize effectively. To address these limitations, we propose a novel paradigm called AS-GCL that incorporates asymmetric spectral augmentation for graph contrastive learning. A typical GCL framework consists of three key components: graph data augmentation, view encoding, and contrastive loss. Our method introduces significant enhancements to each of these components. Specifically, for data augmentation, we apply spectral-based augmentation to minimize spectral variations, strengthen structural invariance, and reduce noise. With respect to encoding, we employ parameter-sharing encoders with distinct diffusion operators to generate diverse, noise-resistant graph views. For contrastive loss, we introduce an upper-bound loss function that promotes generalization by maintaining a balanced distribution of intra- and inter-class distances. To our knowledge, we are the first to encode augmentation views of the spectral domain using asymmetric encoders. Extensive experiments on eight benchmark datasets across various node-level tasks demonstrate the advantages of the proposed method.
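The page gives no implementation details, so the following is only a rough sketch of the two ideas the abstract names: spectrum-aware augmentation (jittering the normalized Laplacian's eigenvalues by a small bounded amount, so low-frequency global structure is largely preserved) and asymmetric view generation (one shared weight matrix, two different diffusion depths). The function names, the linear encoder, and the specific hop counts are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def spectral_augment(adj, eps=0.05, seed=0):
    """Illustrative spectrum-aware augmentation (assumption, not AS-GCL's exact scheme).

    Jitters the eigenvalues of the symmetric normalized Laplacian by a
    small bounded amount and rebuilds the normalized adjacency, so the
    coarse (low-frequency) structure of the graph is largely preserved.
    """
    rng = np.random.default_rng(seed)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    w, v = np.linalg.eigh(lap)                      # spectral decomposition
    w_aug = np.clip(w + rng.uniform(-eps, eps, size=w.shape), 0.0, 2.0)
    lap_aug = v @ np.diag(w_aug) @ v.T              # perturbed Laplacian
    return np.eye(len(adj)) - lap_aug               # perturbed normalized adjacency

def encode(adj_norm, x, weight, hops):
    """Shared linear encoder; `hops` selects the diffusion operator (asymmetry)."""
    h = x @ weight                                  # same weights for both views
    for _ in range(hops):
        h = adj_norm @ h                            # repeated feature propagation
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-12)
```

Under this sketch, two views would be produced by calling `spectral_augment` with different seeds and encoding each with the same `weight` but different `hops`, giving heterogeneous views from a parameter-sharing encoder.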
Problem

Research questions and friction points this paper is trying to address.

Enhance graph contrastive learning
Address spectral domain limitations
Improve generalization with asymmetric augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymmetric spectral augmentation technique
Parameter-sharing encoders with diffusion
Upper-bound loss for generalization
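The exact form of the upper-bound loss is not given on this page. For orientation, a standard InfoNCE contrastive loss between two view embeddings, which the paper's upper-bound variant reportedly extends with intra-class compactness and inter-class separability terms, can be sketched as follows (illustrative baseline only):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE between two L2-normalized view embeddings.

    Illustrative baseline only: AS-GCL's upper-bound loss is described as
    additionally balancing intra- and inter-class distances, but its exact
    form is not stated on this page.
    """
    sim = z1 @ z2.T / tau                        # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal
```

Each node's embedding in one view is pulled toward its counterpart in the other view (the diagonal) and pushed away from all other nodes, which is the generic objective the upper-bound loss is said to tighten.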