🤖 AI Summary
This work addresses key challenges in unsupervised domain adaptation for graph classification—namely, label scarcity in the target domain, insufficient topological modeling, and substantial domain shift. To this end, we propose a dual-path coupled contrastive learning framework. Methodologically, we introduce the first integration of implicit graph convolutional networks (GCNs) and explicit hierarchical graph kernels (HGKs) into a two-branch architecture: the GCN branch captures local structural patterns, while the HGK branch encodes global semantic motifs. These branches are jointly optimized via multi-view coupled contrastive learning to achieve cross-domain semantic alignment and complementary representation enhancement. Crucially, the framework operates without any target-domain labels, mitigating both inadequate topological exploration and domain shift. Extensive experiments on multiple benchmark datasets demonstrate that our approach consistently outperforms state-of-the-art methods across diverse domain transfer settings, exhibiting superior robustness and generalization capability.
📝 Abstract
Although graph neural networks (GNNs) have achieved impressive results in graph classification, they often require abundant task-specific labels, which can be prohibitively costly to acquire. A feasible solution is to leverage additional labeled graphs from a related source domain to enhance unsupervised learning on the target domain. However, how to apply GNNs to domain adaptation remains unsolved, owing to the insufficient exploration of graph topology and the significant domain discrepancy. In this paper, we propose Coupled Contrastive Graph Representation Learning (CoCo), which extracts topological information from coupled learning branches and reduces the domain discrepancy with coupled contrastive learning. CoCo contains a graph convolutional network branch and a hierarchical graph kernel network branch, which explore graph topology in implicit and explicit manners, respectively. Moreover, we incorporate the coupled branches into a holistic multi-view contrastive learning framework, which not only combines graph representations learned from complementary views for enhanced understanding, but also encourages similarity between cross-domain example pairs with the same semantics for domain alignment. Extensive experiments on popular datasets show that CoCo generally outperforms competing baselines across different settings.
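To make the coupled contrastive idea concrete, the sketch below pairs each graph's GCN-branch embedding with its kernel-branch embedding as the positive pair in an InfoNCE-style loss, with other graphs in the batch serving as negatives. This is an illustrative simplification, not CoCo's exact objective: the function name, temperature value, and NumPy stand-ins for the two branch encoders are assumptions for demonstration.

```python
import numpy as np

def coupled_contrastive_loss(z_gcn, z_hgk, temperature=0.5):
    """InfoNCE-style loss between two views of the same batch of graphs.

    z_gcn: (N, d) embeddings from the (hypothetical) GCN branch.
    z_hgk: (N, d) embeddings from the (hypothetical) kernel branch.
    Row i of each matrix is treated as a positive pair; all other
    rows act as in-batch negatives.
    """
    # L2-normalize both views so dot products are cosine similarities
    z1 = z_gcn / np.linalg.norm(z_gcn, axis=1, keepdims=True)
    z2 = z_hgk / np.linalg.norm(z_hgk, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature            # (N, N) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)    # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal (same graph, two views)
    return -np.mean(np.diag(log_prob))

# Toy check: aligned views should score a lower loss than mismatched ones
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = coupled_contrastive_loss(z, z)
mismatched = coupled_contrastive_loss(z, z[::-1].copy())
print(aligned < mismatched)
```

In the full framework this loss would be computed over multiple view pairings (within-domain and cross-domain), so that semantically matching examples from source and target are pulled together while the two branches regularize each other.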