🤖 AI Summary
Unsupervised graph-level anomaly detection (UGAD) faces a practical challenge: training graphs often contain anomalies, which distort learned graph representations. Method: This paper proposes a robust graph representation learning framework featuring (i) an encoder-anchor alignment denoising mechanism that selectively aggregates highly informative normal-node features to suppress anomaly-induced noise; (ii) adversarial training coupled with dual decoders, which reconstruct node attributes and graph topology respectively, to jointly optimize the encoder; and (iii) contrastive learning that tightens representations of normal graphs while pushing anomalous ones apart. Contribution/Results: Extensive experiments on eight real-world benchmark datasets demonstrate strong robustness to varying levels of anomaly contamination in the training data and significant gains over state-of-the-art UGAD approaches, providing an effective solution for anomaly detection on noisy graph data in practical applications.
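The dual-decoder idea in point (ii) can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: the encoder is a single linear map, the attribute decoder reconstructs node features with mean-squared error, and the structure decoder reconstructs the adjacency matrix via an inner-product model with binary cross-entropy. All names (`dual_decoder_losses`, `W_enc`, `W_attr`) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_decoder_losses(X, A, W_enc, W_attr):
    """Hypothetical sketch of a dual-decoder objective.

    X: (n, f) node attribute matrix; A: (n, n) binary adjacency matrix.
    A shared linear encoder produces node embeddings Z; the attribute
    decoder reconstructs X, and the structure decoder reconstructs A
    through an inner-product model sigmoid(Z Z^T).
    """
    Z = X @ W_enc                      # shared encoder
    X_hat = Z @ W_attr                 # attribute decoder
    attr_loss = np.mean((X - X_hat) ** 2)

    A_hat = sigmoid(Z @ Z.T)           # structure decoder
    eps = 1e-9                         # avoid log(0)
    struct_loss = -np.mean(
        A * np.log(A_hat + eps) + (1 - A) * np.log(1 - A_hat + eps)
    )
    return attr_loss, struct_loss

# Toy usage on a random attributed graph
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
A = (rng.random((5, 5)) < 0.5).astype(float)
W_enc = rng.normal(size=(4, 3))
W_attr = rng.normal(size=(3, 4))
attr_loss, struct_loss = dual_decoder_losses(X, A, W_enc, W_attr)
```

In a full model, both losses (plus the adversarial term) would be backpropagated through the shared encoder so that a single embedding must explain attributes and topology jointly.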
📝 Abstract
With the rapid growth of graph-structured data in critical domains, unsupervised graph-level anomaly detection (UGAD) has become a pivotal task. UGAD seeks to identify entire graphs that deviate from normal behavioral patterns. However, most Graph Neural Network (GNN) approaches implicitly assume that the training set is clean, containing only normal graphs, which is rarely true in practice. Even modest contamination by anomalous graphs can distort learned representations and sharply degrade performance. To address this challenge, we propose DeNoise, a robust UGAD framework explicitly designed for contaminated training data. It jointly optimizes a graph-level encoder, an attribute decoder, and a structure decoder via an adversarial objective to learn noise-resistant embeddings. Further, DeNoise introduces an encoder-anchor alignment denoising mechanism that fuses high-information node embeddings from normal graphs into all graph embeddings, improving representation quality while suppressing anomaly interference. A contrastive learning component then compacts normal graph embeddings and repels anomalous ones in the latent space. Extensive experiments on eight real-world datasets demonstrate that DeNoise consistently learns reliable graph-level representations under varying noise intensities and significantly outperforms state-of-the-art UGAD baselines.
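The contrastive component described above (compact normal graph embeddings, repel anomalous ones) can be sketched as a simple margin-based loss. This is an illustrative stand-in, not DeNoise's actual objective: it pulls embeddings flagged as normal toward their mean anchor and pushes suspected-anomalous embeddings at least a margin away. The function name `contrastive_graph_loss` and the margin formulation are assumptions for the sketch.

```python
import numpy as np

def contrastive_graph_loss(embeddings, is_normal, margin=1.0):
    """Hypothetical margin-based contrastive loss over graph embeddings.

    embeddings: (n_graphs, d) array of graph-level embeddings.
    is_normal: boolean mask marking graphs treated as normal.
    Normal embeddings are pulled toward their mean anchor; the rest are
    pushed until they are at least `margin` away from it.
    """
    anchor = embeddings[is_normal].mean(axis=0)          # anchor from normal graphs
    dist = np.linalg.norm(embeddings - anchor, axis=1)   # distance to anchor
    pull = np.mean(dist[is_normal] ** 2)                 # compact normal graphs
    push = np.mean(np.maximum(0.0, margin - dist[~is_normal]) ** 2)
    return pull + push

# A well-separated anomaly contributes almost nothing to the loss...
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
normal = np.array([True, True, False])
loss_separated = contrastive_graph_loss(emb, normal)

# ...while an anomaly sitting near the normal anchor is penalized.
emb_close = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])
loss_close = contrastive_graph_loss(emb_close, normal)
```

The margin hinge means anomalies already far from the normal cluster stop contributing gradient, so training effort concentrates on the ambiguous graphs near the anchor.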