🤖 AI Summary
This paper addresses conditional shift in unsupervised Graph Domain Adaptation (GDA), caused by local dependencies among node features. We theoretically establish, for the first time, that such local dependencies are a necessary condition for conditional shift and derive a generalization bound for GDA via Markov chain modeling. To mitigate this issue, we propose a novel decoupling paradigm: a hybrid architecture comprising a decoupled graph convolutional layer and a graph Transformer layer, integrated with representation decorrelation regularization and explicit conditional shift analysis. Extensive experiments on multiple benchmark datasets demonstrate significant improvements over state-of-the-art methods; notably, intra-class representation distances are substantially reduced, validating the efficacy of feature decoupling for cross-graph knowledge transfer. The implementation is publicly available.
📝 Abstract
Recent years have witnessed significant advancements in machine learning methods on graphs. However, transferring knowledge effectively from one graph to another remains a critical challenge. This highlights the need for algorithms capable of applying information extracted from a source graph to an unlabeled target graph, a task known as unsupervised graph domain adaptation (GDA). One key difficulty in unsupervised GDA is conditional shift, which hinders transferability. In this paper, we show that conditional shift can be observed only if there exist local dependencies among node features. To support this claim, we provide a rigorous analysis and further derive generalization bounds for GDA when dependent node features are modeled with Markov chains. Guided by these theoretical findings, we propose to improve GDA by decorrelating node features, which can be implemented through decorrelated GCN layers and graph Transformer layers. Our experimental results demonstrate the effectiveness of this approach, showing not only substantial performance gains over baseline GDA methods but also clear visualizations of small intra-class distances in the learned representations. Our code is available at https://github.com/TechnologyAiGroup/DFT
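The abstract's core idea, decorrelating node features as a regularizer, can be illustrated with a minimal sketch. The function below (a hypothetical illustration, not the paper's actual implementation) penalizes the squared off-diagonal entries of the feature covariance matrix of a batch of node representations, which drives features toward pairwise decorrelation:

```python
import numpy as np

def decorrelation_loss(H):
    """Illustrative decorrelation penalty on node representations.

    H: (num_nodes, num_features) matrix of node representations.
    Returns the sum of squared off-diagonal covariance entries;
    the loss is zero when features are pairwise uncorrelated.
    """
    # Center each feature dimension
    Hc = H - H.mean(axis=0, keepdims=True)
    # Sample covariance of the feature dimensions
    cov = (Hc.T @ Hc) / (H.shape[0] - 1)
    # Keep only off-diagonal terms (cross-feature correlations)
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2))

# Perfectly correlated features incur a nonzero penalty
print(decorrelation_loss(np.array([[1., 1.], [2., 2.], [3., 3.]])))   # → 2.0
# Uncorrelated features incur (near-)zero penalty
print(decorrelation_loss(np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])))  # → 0.0
```

In training, such a term would be added to the task loss so that the decorrelated GCN layers learn representations with weaker local feature dependencies.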