🤖 AI Summary
This work addresses the performance degradation of pre-trained graph models on cross-domain downstream tasks caused by distributional shift. To mitigate this issue, the authors propose a dual-branch graph prompt learning framework that combines a frozen pre-trained branch with a lightweight, task-adaptive branch equipped with learnable adapters. The two branches are adaptively fused through a combination of a contrastive loss and a topology-consistency loss. A theoretical analysis provides a formal guarantee that jointly leveraging pre-trained knowledge and task-specific adaptation reduces estimation error, motivating the explicit dual-branch architecture. Extensive experiments demonstrate that the proposed method consistently outperforms existing approaches on both cross-domain few-shot node classification and graph classification tasks, confirming its effectiveness and robustness.
📝 Abstract
Graph Prompt Learning (GPL) has recently emerged as a promising paradigm for the downstream adaptation of pre-trained graph models, mitigating the misalignment between pre-training objectives and downstream tasks. Recently, the focus of GPL has shifted from in-domain to cross-domain scenarios, which are closer to real-world applications, where the pre-training source and the downstream target often differ substantially in data distribution. However, why GPL methods remain effective under such domain shifts is still unexplored. Empirically, we observe that representative GPL methods are competitive with two simple baselines in cross-domain settings, full fine-tuning (FT) and linear probing (LP), motivating us to seek a deeper understanding of the prompting mechanism. We provide a theoretical analysis demonstrating that jointly leveraging these two complementary strategies yields a smaller estimation error than using either one alone, formally proving that cross-domain GPL benefits from the integration of pre-trained knowledge and task-specific adaptation. Based on this insight, we propose GP2F, a dual-branch GPL method that explicitly instantiates the two extremes: (1) a frozen branch that retains pre-trained knowledge, and (2) an adapted branch with lightweight adapters for task-specific adaptation. The two branches are then adaptively fused under topology constraints via a contrastive loss and a topology-consistency loss. Extensive experiments on cross-domain few-shot node and graph classification demonstrate that our method consistently outperforms existing approaches.
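To make the dual-branch idea concrete, here is a minimal NumPy sketch of the abstract's pipeline: a frozen branch, a residual adapter branch, adaptive fusion, and the two training objectives. All names, the gating scalar, and the exact loss forms (an InfoNCE-style contrastive term between the two branches and a similarity-vs-adjacency topology term) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 6, 8                                     # nodes, embedding dim
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                          # symmetric toy adjacency

Z_frozen = rng.standard_normal((n, d))          # output of the frozen pre-trained branch
W_adapter = 0.1 * rng.standard_normal((d, d))   # lightweight adapter weights (hypothetical)
Z_adapted = Z_frozen + Z_frozen @ W_adapter     # residual-style adapted branch

alpha = 0.5                                     # fusion weight; learnable in practice
Z = alpha * Z_frozen + (1 - alpha) * Z_adapted  # adaptive fusion of the two branches

def info_nce(Za, Zb, tau=0.5):
    """Contrastive loss: node i's frozen/adapted views form the positive pair."""
    Za = Za / np.linalg.norm(Za, axis=1, keepdims=True)
    Zb = Zb / np.linalg.norm(Zb, axis=1, keepdims=True)
    logits = (Za @ Zb.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p)).mean()

def topo_loss(Z, A):
    """Topology-consistency: fused-embedding similarity should match adjacency."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T
    return ((A - S) ** 2).mean()

L_con = info_nce(Z_frozen, Z_adapted)
L_topo = topo_loss(Z, A)
L_total = L_con + 0.1 * L_topo                  # 0.1 is an arbitrary trade-off weight
```

In a real model, `W_adapter` and `alpha` would be optimized by gradient descent on `L_total` together with the few-shot classification loss, while the frozen branch stays fixed.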