🤖 AI Summary
Early graph prompting methods rely on task-specific designs, limiting generalizability; while universal graph prompting theoretically supports arbitrary prompt functionality, recent selective-node prompting strategies violate its foundational assumptions. Method: We first rigorously prove that prompting *all* nodes is a necessary condition for universal graph prompting. Building on this, we propose LEAP—a framework that constructs universal, full-graph prompts in the input feature space and employs an Actor-Critic reinforcement learning mechanism to dynamically optimize both prompt placement and content. Contribution/Results: Extensive experiments demonstrate that LEAP consistently outperforms fine-tuning and other prompting baselines across graph-level and node-level tasks, diverse pretraining paradigms (e.g., contrastive, masked autoencoding), and both full-data and few-shot settings. LEAP thus bridges theoretical soundness—guaranteeing universality—with strong empirical generalization.
📝 Abstract
Early graph prompt tuning approaches relied on task-specific designs for Graph Neural Networks (GNNs), limiting their adaptability across diverse pre-training strategies. In contrast, another promising line of research has investigated universal graph prompt tuning, which operates directly in the input graph's feature space and establishes a theoretical foundation: universal graph prompt tuning can achieve an effect equivalent to that of any prompting function, eliminating dependence on specific pre-training strategies. Recent works propose selective node-based graph prompt tuning in pursuit of more effective prompts. However, we argue that selective node-based graph prompt tuning inevitably compromises the theoretical foundation of universal graph prompt tuning. In this paper, we strengthen that theoretical foundation by introducing stricter constraints, demonstrating that adding prompts to all nodes is a necessary condition for achieving the universality of graph prompts. To this end, we propose a novel model and paradigm, Learning and Editing Universal GrAph Prompt Tuning (LEAP), which preserves the theoretical foundation of universal graph prompt tuning while pursuing more effective prompts. Specifically, we first build basic universal graph prompts to preserve the theoretical foundation and then employ actor-critic reinforcement learning to select nodes and edit their prompts. Extensive experiments on graph- and node-level tasks across various pre-training strategies, in both full-shot and few-shot scenarios, show that LEAP consistently outperforms fine-tuning and other prompt-based approaches.
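To make the distinction at the heart of the abstract concrete, here is a minimal sketch of prompting in the input feature space. All names (`X`, `p`, `selected`) are illustrative and not LEAP's actual API; the prompt is a frozen random vector rather than a learned one, and LEAP's actor-critic editing step is omitted.

```python
import numpy as np

# Hypothetical illustration: universal vs. selective prompting in the
# input feature space of a graph. Names are made up for this sketch.
rng = np.random.default_rng(0)
n_nodes, feat_dim = 5, 4
X = rng.normal(size=(n_nodes, feat_dim))   # input node feature matrix
p = rng.normal(size=(feat_dim,))           # a (here frozen) prompt vector

# Universal prompting: every node receives the prompt. The paper argues
# this is a necessary condition for the universality guarantee.
X_universal = X + p

# Selective prompting: only a subset of nodes receives the prompt, which
# the paper argues breaks the universality guarantee.
selected = np.array([0, 2])
X_selective = X.copy()
X_selective[selected] += p

# Every node's features change under universal prompting...
assert np.all(np.any(X_universal != X, axis=1))
# ...while unselected nodes are untouched under selective prompting.
assert np.allclose(X_selective[[1, 3, 4]], X[[1, 3, 4]])
```

In LEAP, per the abstract, the universal prompt on all nodes is kept as the base (preserving the theory), and reinforcement learning then only *edits* prompt content on chosen nodes rather than removing prompts from the rest.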