Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection

📅 2024-04-25
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
In unsupervised graph anomaly detection, anomalies corrupt graph neural network (GNN) representation learning, degrading detection performance. Method: We propose an anomaly-robust GNN learning framework featuring (1) dual auxiliary networks with neighborhood correlation constraints to suppress inconsistent information encoding induced by anomalous neighbors, and (2) an adaptive caching module that dynamically avoids overfitting to anomaly-contaminated observations during unsupervised reconstruction. Contribution/Results: To our knowledge, this is the first work to systematically "shield" GNNs from anomaly contamination under fully unsupervised settings. By integrating correlation regularization with cache-augmented contrastive reconstruction, our method consistently outperforms 17 state-of-the-art baselines across synthetic and real-world graph datasets, achieving significant gains in both detection accuracy and robustness.
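The neighborhood correlation constraint above can be illustrated with a minimal sketch. This is not the paper's implementation; all names and the use of a simple Pearson correlation are assumptions for illustration only. The idea: if a node's embedding is weakly (or negatively) correlated with its aggregated neighborhood embedding, the loss grows, discouraging the encoder from blindly absorbing a possibly anomalous neighborhood signal.

```python
# Illustrative sketch only, NOT the authors' actual loss. Assumes a plain
# Pearson correlation between a node embedding and the mean of its
# neighbors' embeddings; the paper's correlation constraints may differ.

def mean_vec(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_penalty(node_emb, neighbor_embs):
    """Penalty that grows as node/neighborhood correlation drops."""
    return 1.0 - pearson(node_emb, mean_vec(neighbor_embs))

# A node aligned with its neighbors incurs almost no penalty...
aligned = correlation_penalty([1.0, 2.0, 3.0],
                              [[1.1, 2.0, 2.9], [0.9, 2.1, 3.0]])
# ...while an anti-correlated (anomaly-like) node is penalized heavily.
inverted = correlation_penalty([3.0, 2.0, 1.0],
                               [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
assert aligned < inverted
```

In a full framework this term would be added to the reconstruction loss, so the auxiliary networks are rewarded for encodings that stay consistent with clean neighborhood structure.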

๐Ÿ“ Abstract
Unsupervised graph anomaly detection aims at identifying rare patterns that deviate from the majority in a graph without the aid of labels, which is important for a variety of real-world applications. Recent advances have utilized Graph Neural Networks (GNNs) to learn effective node representations by aggregating information from neighborhoods. This is motivated by the hypothesis that nodes in the graph tend to exhibit consistent behaviors with their neighborhoods. However, such consistency can be disrupted by graph anomalies in multiple ways. Most existing methods directly employ GNNs to learn representations, disregarding the negative impact of graph anomalies on GNNs, resulting in sub-optimal node representations and anomaly detection performance. While a few recent approaches have redesigned GNNs for graph anomaly detection under semi-supervised label guidance, how to address the adverse effects of graph anomalies on GNNs in unsupervised scenarios and learn effective representations for anomaly detection are still under-explored. To bridge this gap, in this paper, we propose a simple yet effective framework for Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection (G3AD). Specifically, G3AD introduces two auxiliary networks along with correlation constraints to guard the GNNs from inconsistent information encoding. Furthermore, G3AD introduces an adaptive caching module to guard the GNNs from solely reconstructing the observed data that contains anomalies. Extensive experiments demonstrate that our proposed G3AD can outperform seventeen state-of-the-art methods on both synthetic and real-world datasets.
Problem

Research questions and friction points this paper is trying to address.

Unsupervised graph anomaly detection without labels
Negative impact of anomalies on GNN representations
Lack of methods to guard GNNs in unsupervised scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses auxiliary networks with correlation constraints
Introduces adaptive caching module
Guards GNNs from inconsistent information encoding
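The adaptive caching idea can be sketched as follows. This is a conceptual illustration under assumptions, not G3AD's actual module: here the cache is a simple per-node exponential moving average that serves as the reconstruction target, so a single anomaly-contaminated observation cannot dominate what the decoder is trained to reproduce. The class name and the fixed mixing weight `alpha` are hypothetical; the paper adapts its caching behavior dynamically.

```python
# Illustrative sketch in the spirit of an adaptive cache, NOT the
# authors' implementation. The cache blends raw node attributes with a
# running estimate so reconstruction targets drift away from
# anomaly-contaminated observations.

class AdaptiveCache:
    """Per-node cached attribute vectors updated by a mixing weight.

    A small `alpha` makes the cache change slowly, so one anomalous
    observation is only partially absorbed into the target.
    """

    def __init__(self, num_nodes, dim, alpha=0.1):
        self.alpha = alpha
        self.cache = [[0.0] * dim for _ in range(num_nodes)]
        self.initialized = [False] * num_nodes

    def update(self, node, observed):
        # The first observation seeds the cache; later ones are blended in.
        if not self.initialized[node]:
            self.cache[node] = list(observed)
            self.initialized[node] = True
        else:
            self.cache[node] = [
                (1 - self.alpha) * c + self.alpha * o
                for c, o in zip(self.cache[node], observed)
            ]
        return self.cache[node]


cache = AdaptiveCache(num_nodes=1, dim=2, alpha=0.5)
cache.update(0, [1.0, 1.0])           # seeds the cache with [1.0, 1.0]
target = cache.update(0, [5.0, 1.0])  # anomalous spike only half absorbed
print(target)  # [3.0, 1.0]
```

Training against `target` instead of the raw observation is what "guards" the reconstruction objective: the anomalous spike contributes, but cannot fully define, the signal the GNN learns to reproduce.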
👥 Authors
Yuan-Qi Bei
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Sheng Zhou
School of Software Technology, Zhejiang University, Hangzhou, China
Jinke Shi
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Yao Ma
Department of Computer Science, Rensselaer Polytechnic Institute, New York, USA
Haishuai Wang
Harvard University
Jiajun Bu
College of Computer Science and Technology, Zhejiang University, Hangzhou, China