🤖 AI Summary
Existing graph prompting methods primarily focus on feature-level modifications while neglecting the promptability of graph topology, limiting the adaptability of pre-trained graph neural networks (GNNs). To address this, we propose GraphTOP, the first topology-oriented graph prompting framework, which models graph structural rewiring as a differentiable sparse optimization problem in a continuous probability space. Specifically, GraphTOP performs local, learnable topological adjustments over multi-hop subgraphs via edge rewiring and reparameterization, while keeping the pre-trained GNN backbone entirely frozen. Evaluated on five graph datasets under four distinct pre-training strategies, GraphTOP consistently outperforms six state-of-the-art baselines on node classification tasks. These results validate both the effectiveness and generalizability of topology-oriented prompting, establishing a new direction for parameter-efficient adaptation of pre-trained GNNs.
📝 Abstract
Graph Neural Networks (GNNs) have revolutionized the field of graph learning by learning expressive graph representations from massive graph data. As a common pattern to train powerful GNNs, the "pre-training, adaptation" scheme first pre-trains GNNs over unlabeled graph data and subsequently adapts them to specific downstream tasks. In the adaptation phase, graph prompting is an effective strategy that modifies input graph data with learnable prompts while keeping pre-trained GNN models frozen. Existing graph prompting studies mainly focus on *feature-oriented* methods that apply graph prompts to node features or hidden representations. However, these studies often achieve suboptimal performance, as they consistently overlook the potential of *topology-oriented* prompting, which adapts pre-trained GNNs by modifying the graph topology. In this study, we conduct a pioneering investigation of graph prompting from the perspective of graph topology. We propose the first **Graph** **T**opology-**O**riented **P**rompting (GraphTOP) framework to effectively adapt pre-trained GNN models for downstream tasks. More specifically, we reformulate topology-oriented prompting as an edge rewiring problem within multi-hop local subgraphs and relax it into the continuous probability space through reparameterization, while ensuring tight relaxation and preserving graph sparsity. Extensive experiments on five graph datasets under four pre-training strategies demonstrate that our proposed GraphTOP outperforms six baselines on node classification tasks. Our code is available at https://github.com/xbfu/GraphTOP.
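To make the edge-rewiring relaxation concrete, here is a minimal sketch of how a discrete rewiring decision per candidate edge can be relaxed into a continuous probability via a Gumbel-sigmoid reparameterization, with a hard threshold preserving sparsity. This is one common choice for such relaxations, not necessarily the paper's exact formulation; the names `rewire_local_subgraph` and `edge_logits` are illustrative, and the NumPy sketch omits the backward pass (in an autograd framework, a straight-through estimator would route gradients through the soft probabilities).

```python
import numpy as np

def gumbel_sigmoid(logits, tau, rng):
    """Sample relaxed Bernoulli edge indicators via the Gumbel-sigmoid trick."""
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=logits.shape)
    noise = np.log(u) - np.log1p(-u)          # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / tau))

def rewire_local_subgraph(adj, edge_logits, tau=0.5, seed=0):
    """Rewire a multi-hop subgraph's adjacency matrix.

    `edge_logits` holds one learnable score per candidate edge. Thresholding
    the relaxed samples keeps the prompted graph binary and sparse; in a real
    autograd framework the soft probabilities would carry the gradients
    (straight-through), which this NumPy sketch only notes in comments.
    """
    rng = np.random.default_rng(seed)
    probs = gumbel_sigmoid(edge_logits, tau, rng)
    hard = (probs > 0.5).astype(adj.dtype)    # discrete edge decisions
    hard = np.triu(hard, 1)                   # no self-loops, upper triangle only
    return hard + hard.T                      # symmetric (undirected) topology

# Toy 4-node path subgraph: logits initialized near the observed topology, so
# existing edges are likely kept and absent edges are likely left out.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
logits = np.where(adj > 0, 4.0, -4.0)
new_adj = rewire_local_subgraph(adj, logits)
print(new_adj)
```

The thresholding step is what keeps the rewired graph sparse: only edges whose relaxed probability clears 0.5 survive, so the prompted topology stays a valid binary adjacency matrix rather than a dense weighted one.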