AI Summary
This work addresses a limitation of existing graph neural network (GNN) prompting methods: they still rely on task-specific parameter fine-tuning and thus fail to achieve true parameter-free generalization across graphs. To overcome this, we propose a novel fine-tuning-free cross-graph GNN prompting framework that, for the first time, enables effective prompt-based learning on both homogeneous and heterogeneous graphs without updating any model parameters, thereby supporting plug-and-play inference. By introducing a unified design for cross-graph generalization, our method substantially improves few-shot prediction performance, achieving an average accuracy gain of 30.8% (with improvements reaching up to 54%) across multiple tasks, significantly outperforming current state-of-the-art approaches.
Abstract
GNN prompting aims to adapt models across tasks and graphs without extensive retraining. However, most existing graph prompt methods still require task-specific parameter updates and struggle to generalize across graphs, which limits their performance and undermines the core promise of prompting. In this work, we introduce a Cross-graph Tuning-free Prompting framework (CTP) that supports both homogeneous and heterogeneous graphs and can be deployed directly on unseen graphs without further parameter tuning, thus enabling a plug-and-play GNN inference engine. Extensive experiments on few-shot prediction tasks show that, compared to state-of-the-art methods, CTP achieves an average accuracy gain of 30.8% and a maximum gain of 54%, confirming its effectiveness and offering a new perspective on graph prompt learning.