Disentangled Graph Prompting for Out-Of-Distribution Detection

📅 2026-03-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the performance limitations of graph neural networks (GNNs) in out-of-distribution (OOD) detection caused by insufficient supervisory signals. To this end, the authors propose a disentangled graph prompting mechanism that leverages a pre-trained GNN encoder and introduces two distinct prompt generators: one producing class-specific prompts and the other class-agnostic prompts. Crucially, the method explicitly exploits in-distribution (ID) graph labels to guide prompt learning without requiring any OOD training data, thereby circumventing the sub-optimality associated with end-to-end training. Prompt graphs are generated via edge-weight modulation and optimized through a multi-objective loss. Evaluated on ten benchmark datasets, the approach achieves a relative AUC improvement of 3.63% over the strongest graph OOD detection baseline, with ablation studies confirming the contribution of each component.
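The summary above says prompt graphs are produced by modulating the edge weights of an input graph, with one class-agnostic generator and class-specific generators. A minimal sketch of that idea follows; it is not the authors' implementation, and the `EdgePromptGenerator` class, its linear edge scorer, and the toy graph are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class EdgePromptGenerator:
    """Illustrative prompt generator (a stand-in, not DGP's actual code):
    scores each edge from the concatenated features of its endpoints and
    uses the score as a new edge weight, keeping the topology fixed."""

    def __init__(self, feat_dim, seed=0):
        r = np.random.default_rng(seed)
        self.w = r.normal(scale=0.1, size=2 * feat_dim)  # linear scorer weights
        self.b = 0.0

    def __call__(self, adj, feats):
        n = adj.shape[0]
        # Build all (x_i ; x_j) endpoint-feature pairs, score them, and
        # modulate existing edges: prompt weight s_ij in (0, 1), zero
        # wherever the input graph had no edge.
        pair = np.concatenate(
            [np.repeat(feats, n, axis=0), np.tile(feats, (n, 1))], axis=1
        )
        scores = sigmoid(pair @ self.w + self.b).reshape(n, n)
        return adj * scores

# Toy input graph: 4 nodes on a ring, 3-dimensional random features.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = np.random.default_rng(0).normal(size=(4, 3))

# One class-agnostic generator plus one generator per ID class (2 here).
agnostic = EdgePromptGenerator(feat_dim=3, seed=1)
per_class = [EdgePromptGenerator(feat_dim=3, seed=2 + c) for c in range(2)]
prompt_graphs = [agnostic(adj, feats)] + [g(adj, feats) for g in per_class]
```

Each prompt graph preserves the input topology but carries its own edge weights, so a frozen pre-trained GNN encoder can read out different (class-specific or class-agnostic) views of the same graph.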
๐Ÿ“ Abstract
When test data and training data come from different distributions, deep neural networks (DNNs) face significant safety risks in practical applications. Therefore, out-of-distribution (OOD) detection techniques, which can identify OOD samples at test time and alert the system, are urgently needed. Existing graph OOD detection methods usually characterize fine-grained in-distribution (ID) patterns from multiple perspectives, and train end-to-end graph neural networks (GNNs) for prediction. However, because OOD data are unavailable during training, the absence of explicit supervision signals can lead to sub-optimal performance of end-to-end encoders. To address this issue, we follow the pre-training+prompting paradigm to utilize pre-trained GNN encoders, and propose Disentangled Graph Prompting (DGP) to capture fine-grained ID patterns with the help of ID graph labels. Specifically, we design two prompt generators that respectively generate class-specific and class-agnostic prompt graphs by modifying the edge weights of an input graph. We also design several effective losses to train the prompt generators and prevent trivial solutions. We conduct extensive experiments on ten datasets to demonstrate the superiority of our proposed DGP, which achieves a relative AUC improvement of 3.63% over the best graph OOD detection baseline. Ablation studies and hyper-parameter experiments further show the effectiveness of DGP. Code is available at https://github.com/BUPT-GAMMA/DGP.
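The headline result is stated as a relative AUC improvement, where AUC measures how well a detector's scores separate ID graphs from OOD graphs. As a reference for that metric, here is a small self-contained ROC-AUC computation via its ranking interpretation; the detector scores below are made up purely for illustration.

```python
def roc_auc(id_scores, ood_scores):
    """ROC-AUC via its ranking interpretation: the probability that a
    randomly drawn ID score exceeds a randomly drawn OOD score, with
    ties counted as one half. 1.0 = perfect separation, 0.5 = chance."""
    wins = sum(
        1.0 if i > o else 0.5 if i == o else 0.0
        for i in id_scores
        for o in ood_scores
    )
    return wins / (len(id_scores) * len(ood_scores))

# Hypothetical detector outputs: higher score = "looks in-distribution".
id_scores = [0.92, 0.85, 0.78, 0.60]
ood_scores = [0.70, 0.55, 0.40]

print(roc_auc(id_scores, ood_scores))  # 11/12, about 0.917
```

A relative improvement of 3.63% means the new AUC is 1.0363 times the baseline AUC, not 3.63 percentage points higher.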
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution detection
graph neural networks
distribution shift
OOD detection
graph representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled Graph Prompting
Out-of-Distribution Detection
Graph Neural Networks
Prompting Paradigm
Pre-trained GNN
Cheng Yang
Beijing University of Posts and Telecommunications
Network Representation Learning, Graph Neural Network, Network Embedding
Yu Hao
Beijing Key Lab of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, China
Qi Zhang
China Mobile Group Shaanxi Co., Ltd.
Chuan Shi
Beijing University of Posts and Telecommunications
data mining, machine learning, social network analysis