One Prompt Fits All: Universal Graph Adaptation for Pretrained Models

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph prompt learning (GPL) methods lack a unified understanding of how prompts interact with pretrained models and generalize poorly under distribution shifts (e.g., homophilic → heterophilic graphs). To address this, the authors propose UniPrompt, a theoretically grounded, universal graph adaptation framework that characterizes the intrinsic role of representation-level prompts in GPL. Its core is a lightweight, model-agnostic adapter module that injects learnable prompts solely into the representation space, preserving the original graph structure and requiring no architectural modifications to the pretrained model. Extensive experiments demonstrate that UniPrompt enables plug-and-play adaptation across diverse pretrained graph models, consistently improving performance on both in-domain and out-of-domain downstream tasks, including those involving heterophilic graphs, thereby overcoming the generalization bottleneck of prior GPL approaches.
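The representation-level prompting described above can be sketched in a few lines. The following is a minimal NumPy illustration of the general idea only, not the paper's implementation: the random-projection "encoder," the dimensions, and the additive form of the prompt are all assumptions made for the example. The key property it shows is that the pretrained encoder stays frozen and untouched, while a learnable prompt vector and a lightweight classifier operate purely in representation space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen pretrained encoder: here just a fixed random
# projection followed by tanh, standing in for a pretrained graph model.
W_enc = rng.standard_normal((16, 8))

def frozen_encoder(X):
    # No parameters here are ever updated during adaptation.
    return np.tanh(X @ W_enc)

# Representation-level prompt: a learnable vector added to every node
# embedding, initialized to zero so adaptation starts from the frozen model.
prompt = np.zeros(8)

# Simple downstream classifier: the only other trainable component.
W_clf = rng.standard_normal((8, 3)) * 0.1

X = rng.standard_normal((5, 16))   # 5 nodes, 16 raw input features
Z = frozen_encoder(X)              # frozen representations, shape (5, 8)
Z_prompted = Z + prompt            # prompt intervenes only in representation space
logits = Z_prompted @ W_clf        # classifier adapts to the downstream task

print(logits.shape)                # (5, 3)
```

In an actual training loop, only `prompt` and `W_clf` would receive gradient updates; the input graph and the encoder weights are left unchanged, which is what makes the adapter plug-and-play across pretrained models.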

📝 Abstract
Graph Prompt Learning (GPL) has emerged as a promising paradigm that bridges graph pretraining models and downstream scenarios, mitigating label dependency and the misalignment between upstream pretraining and downstream tasks. Although existing GPL studies explore various prompt strategies, their effectiveness and underlying principles remain unclear. We identify two critical limitations: (1) Lack of consensus on underlying mechanisms: although current GPL methods have advanced the field, there is no consensus on how prompts interact with pretrained models, as different strategies intervene at different spaces within the model, i.e., input-level, layer-wise, and representation-level prompts. (2) Limited scenario adaptability: most methods fail to generalize across diverse downstream scenarios, especially under data distribution shifts (e.g., homophilic-to-heterophilic graphs). To address these issues, we theoretically analyze existing GPL approaches and reveal that representation-level prompts essentially function as fine-tuning a simple downstream classifier. This suggests that graph prompt learning should focus on unleashing the capability of the pretrained model, while the classifier adapts to the downstream scenario. Based on these findings, we propose UniPrompt, a novel GPL method that adapts any pretrained model, unleashing its capability while preserving the structure of the input graph. Extensive experiments demonstrate that our method integrates effectively with various pretrained models and achieves strong performance in both in-domain and cross-domain scenarios.
Problem

Research questions and friction points this paper is trying to address.

Investigating the unclear mechanisms and limitations of Graph Prompt Learning
Addressing the poor adaptability of GPL methods across diverse graph scenarios
Developing a universal prompt method that unleashes the capabilities of pretrained models
Innovation

Methods, ideas, or system contributions that make the work stand out.

A universal prompt that adapts to any pretrained graph model
Unleashes the pretrained model's capability while preserving the input graph's structure
Focuses on representation-level prompts, letting the downstream classifier handle task adaptation