GraphPrompter: Multi-stage Adaptive Prompt Optimization for Graph In-Context Learning

📅 2025-05-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Graph-based in-context learning faces two key challenges: (1) random subgraph/edge sampling introduces noisy prompts that degrade performance; and (2) class distribution shift between pretraining and test graphs—e.g., significantly more test classes than training classes—severely impairs zero-shot generalization. To address these, we propose the first three-stage adaptive graph prompt optimization framework: (1) edge importance modeling via graph reconstruction to generate high-quality candidate prompts; (2) semantic-aware prompt selection via dynamic k-nearest neighbor retrieval; and (3) a cache replacement mechanism enabling incremental prompt enhancement. Our method is parameter-free—requiring no gradient updates—and substantially improves cross-graph generalization and few-shot adaptability. Extensive experiments demonstrate an average 8.2% improvement over state-of-the-art baselines across diverse scenarios. The code is publicly available.

Technology Category

Application Category

📝 Abstract
Graph In-Context Learning, with the ability to adapt pre-trained graph models to novel and diverse downstream graphs without updating any parameters, has gained much attention in the community. The key to graph in-context learning is to perform downstream graphs conditioned on chosen prompt examples. Existing methods randomly select subgraphs or edges as prompts, leading to noisy graph prompts and inferior model performance. Additionally, due to the gap between pre-training and testing graphs, when the number of classes in the testing graphs is much greater than that in the training, the in-context learning ability will also significantly deteriorate. To tackle the aforementioned challenges, we develop a multi-stage adaptive prompt optimization method GraphPrompter, which optimizes the entire process of generating, selecting, and using graph prompts for better in-context learning capabilities. Firstly, Prompt Generator introduces a reconstruction layer to highlight the most informative edges and reduce irrelevant noise for graph prompt construction. Furthermore, in the selection stage, Prompt Selector employs the $k$-nearest neighbors algorithm and pre-trained selection layers to dynamically choose appropriate samples and minimize the influence of irrelevant prompts. Finally, we leverage a Prompt Augmenter with a cache replacement strategy to enhance the generalization capability of the pre-trained model on new datasets. Extensive experiments show that GraphPrompter effectively enhances the in-context learning ability of graph models. On average across all the settings, our approach surpasses the state-of-the-art baselines by over 8%. Our code is released at https://github.com/karin0018/GraphPrompter.
Problem

Research questions and friction points this paper is trying to address.

Optimizes graph prompt generation to reduce noise and improve performance
Addresses performance drop when testing graphs have more classes than training
Enhances generalization of pre-trained models on new graph datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstruction layer highlights informative edges
KNN and selection layers optimize prompt choice
Cache strategy enhances model generalization
🔎 Similar Papers
No similar papers found.
R
Rui Lv
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Zaixi Zhang
Zaixi Zhang
Princeton University
AI for ScienceGenerative AIAI SecurityBioSecurity
K
Kai Zhang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Q
Qi Liu
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
W
Weibo Gao
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
J
Jiawei Liu
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
J
Jiaxia Yan
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Linan Yue
Linan Yue
Southeast University
Trustworthy AINatural Language Processing
Fangzhou Yao
Fangzhou Yao
University of Illinois at Urbana-Champaign
Cloud ComputingDistributed Systems