🤖 AI Summary
Existing graph prompt learning (GPL) methods lack a unified understanding of how prompts interact with pretrained models and generalize poorly under distribution shifts (e.g., homophilic → heterophilic graphs). To address this, the paper proposes UniPrompt, a theoretically grounded, universal graph adaptation framework that systematically characterizes the intrinsic role of representation-level prompts in GPL. Its core is a lightweight, model-agnostic adapter module that injects learnable prompts solely into the representation space, preserving the original graph structure and requiring no architectural modifications to the pretrained model. Extensive experiments demonstrate that UniPrompt enables plug-and-play adaptation across diverse pretrained graph models, consistently improving performance on both in-domain and out-of-domain downstream tasks, including those involving heterophilic graphs, thereby overcoming the generalization bottleneck inherent in prior GPL approaches.
📝 Abstract
Graph Prompt Learning (GPL) has emerged as a promising paradigm that bridges graph pretraining models and downstream scenarios, mitigating label dependency and the misalignment between upstream pretraining and downstream tasks. Although existing GPL studies explore various prompt strategies, their effectiveness and underlying principles remain unclear. We identify two critical limitations: (1) Lack of consensus on underlying mechanisms: although current GPL methods have advanced the field, there is no consensus on how prompts interact with pretrained models, as different strategies intervene in different spaces within the model, i.e., input-level, layer-wise, and representation-level prompts. (2) Limited scenario adaptability: most methods fail to generalize across diverse downstream scenarios, especially under data distribution shifts (e.g., homophilic-to-heterophilic graphs). To address these issues, we theoretically analyze existing GPL approaches and reveal that representation-level prompts essentially function as fine-tuning a simple downstream classifier. We therefore propose that graph prompt learning should focus on unleashing the capability of the pretrained model, while the classifier adapts to downstream scenarios. Based on these findings, we propose UniPrompt, a novel GPL method that adapts to any pretrained model, unleashing its capability while preserving the structure of the input graph. Extensive experiments demonstrate that our method effectively integrates with various pretrained models and achieves strong performance across in-domain and cross-domain scenarios.
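The abstract's central theoretical claim — that a representation-level prompt is equivalent to fine-tuning a simple downstream classifier — can be illustrated with a minimal sketch. Here a learnable prompt vector `p` is added to frozen node embeddings `H` before a linear classifier `W`; all names (`H`, `p`, `W`) are illustrative placeholders, not symbols from the paper, and this is a hedged toy example rather than the actual UniPrompt implementation:

```python
import numpy as np

# Toy setup: frozen pretrained node embeddings, a learnable prompt,
# and a linear downstream classifier (all illustrative).
rng = np.random.default_rng(0)
num_nodes, dim, num_classes = 5, 8, 3

H = rng.normal(size=(num_nodes, dim))    # frozen pretrained node representations
p = rng.normal(size=(dim,))              # representation-level prompt (learnable)
W = rng.normal(size=(dim, num_classes))  # downstream linear classifier weights

# Prompted logits: the prompt acts only in representation space;
# the input graph itself is untouched.
prompted_logits = (H + p) @ W

# Equivalent view: unprompted logits plus a learned bias b = p @ W,
# i.e., the prompt reduces to fine-tuning a simple classifier's bias term.
equivalent_logits = H @ W + p @ W

print(np.allclose(prompted_logits, equivalent_logits))  # True
```

Since `(H + p) @ W = H @ W + p @ W`, learning `p` with a fixed linear head can only shift the logits by a constant bias, which is why the paper argues the adaptation burden should fall on the classifier while the prompt's job is to unleash the pretrained model's capability.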