🤖 AI Summary
In unsupervised graph anomaly detection, anomalies corrupt graph neural network (GNN) representation learning, degrading detection performance. Method: We propose an anomaly-robust GNN learning framework featuring (1) dual auxiliary networks with neighborhood correlation constraints to suppress inconsistent information encoding induced by anomalous neighbors, and (2) an adaptive caching module that dynamically avoids overfitting to anomaly-contaminated observations during unsupervised reconstruction. Contribution/Results: To our knowledge, this is the first work to systematically "shield" GNNs from anomaly contamination under fully unsupervised settings. By integrating correlation regularization with cache-augmented contrastive reconstruction, our method consistently outperforms 17 state-of-the-art baselines across synthetic and real-world graph datasets, achieving significant gains in both detection accuracy and robustness.
📝 Abstract
Unsupervised graph anomaly detection aims to identify rare patterns that deviate from the majority in a graph without the aid of labels, which is important for a variety of real-world applications. Recent advances have utilized Graph Neural Networks (GNNs) to learn effective node representations by aggregating information from neighborhoods. This is motivated by the hypothesis that nodes in a graph tend to exhibit behaviors consistent with their neighborhoods. However, such consistency can be disrupted by graph anomalies in multiple ways. Most existing methods directly employ GNNs to learn representations, disregarding the negative impact of graph anomalies on GNNs, which results in sub-optimal node representations and anomaly detection performance. While a few recent approaches have redesigned GNNs for graph anomaly detection under semi-supervised label guidance, how to mitigate the adverse effects of graph anomalies on GNNs in unsupervised scenarios and learn effective representations for anomaly detection remains under-explored. To bridge this gap, in this paper, we propose a simple yet effective framework for Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection (G3AD). Specifically, G3AD introduces two auxiliary networks along with correlation constraints to guard the GNNs from inconsistent information encoding. Furthermore, G3AD introduces an adaptive caching module to guard the GNNs from solely reconstructing the observed data that contains anomalies. Extensive experiments demonstrate that our proposed G3AD can outperform seventeen state-of-the-art methods on both synthetic and real-world datasets.
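To make the correlation-constraint idea concrete, the sketch below shows one plausible form of such a regularizer: a loss that encourages the GNN's node embeddings to stay correlated with embeddings produced by an auxiliary network. This is an illustrative toy written with NumPy, not G3AD's actual implementation; the function name `correlation_constraint_loss` and the choice of a per-node Pearson correlation are our assumptions for exposition.

```python
import numpy as np

def correlation_constraint_loss(h_gnn: np.ndarray, h_aux: np.ndarray, eps: float = 1e-8) -> float:
    """Toy correlation regularizer (illustrative, not the paper's exact loss).

    Encourages each node's GNN embedding h_gnn[i] to be linearly correlated
    with the corresponding auxiliary-network embedding h_aux[i].
    Returns the mean of (1 - Pearson correlation) over all nodes,
    which is 0 for perfectly correlated embeddings and up to 2 for
    perfectly anti-correlated ones.
    """
    # Center each node's embedding vector along the feature dimension.
    hg = h_gnn - h_gnn.mean(axis=1, keepdims=True)
    ha = h_aux - h_aux.mean(axis=1, keepdims=True)
    # Per-node Pearson correlation between the two embeddings.
    num = (hg * ha).sum(axis=1)
    den = np.linalg.norm(hg, axis=1) * np.linalg.norm(ha, axis=1) + eps
    corr = num / den
    return float(np.mean(1.0 - corr))

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))          # 5 nodes, 8-dim embeddings
# An affine copy is perfectly correlated, so the penalty is near zero.
print(correlation_constraint_loss(h, 2.0 * h + 1.0))
```

In a full pipeline, a term like this would be added to the reconstruction objective so that anomalous neighbors, whose aggregated messages decorrelate the two views, contribute less to what the encoder learns.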