🤖 AI Summary
This paper addresses zero-shot generalization of Graph Neural Networks (GNNs) under covariate shift, proposing the first unsupervised prompt learning paradigm for GNNs: it adapts pre-trained GNNs to target graph data without updating GNN parameters or accessing any labeled data, relying solely on learnable prompt functions. Methodologically, we formally define the task of unsupervised prompt learning for GNNs and introduce an end-to-end framework integrating consistency regularization, pseudo-label guidance, graph-structural distribution alignment, and prompt embedding optimization. Evaluated across multiple covariate shift benchmarks, our approach significantly outperforms state-of-the-art supervised prompt methods that require labels—demonstrating superior effectiveness, robustness, and generalization capability. This work establishes a novel, label-free adaptation strategy for deploying GNNs in low-resource settings.
📝 Abstract
Prompt tuning methods for Graph Neural Networks (GNNs) have become popular to address the semantic gap between pre-training and fine-tuning steps. However, existing GNN prompting methods rely on labeled data and involve lightweight fine-tuning for downstream tasks. Meanwhile, in-context learning methods for Large Language Models (LLMs) have shown promising performance with no parameter updating and no or minimal labeled data. Inspired by these approaches, in this work, we first introduce a challenging problem setup to evaluate GNN prompting methods. This setup encourages a prompting function to enhance a pre-trained GNN's generalization to a target dataset under covariate shift without updating the GNN's parameters and with no labeled data. Next, we propose a fully unsupervised prompting method based on consistency regularization through pseudo-labeling. We use two regularization techniques to align the prompted graphs' distribution with the original data and reduce biased predictions. Through extensive experiments under our problem setting, we demonstrate that our unsupervised approach outperforms the state-of-the-art prompting methods that have access to labels.