DeNoise: Learning Robust Graph Representations for Unsupervised Graph-Level Anomaly Detection

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unsupervised graph-level anomaly detection (UGAD) faces a practical challenge: training graphs often contain anomalies, which distort the learned graph representations. Method: This paper proposes a robust graph representation learning framework featuring (i) an encoder anchor-alignment denoising mechanism that selectively aggregates highly informative normal-node features to suppress anomaly-induced noise; (ii) adversarial training coupled with dual decoders, which reconstruct node attributes and graph topology respectively, to jointly optimize the encoder; and (iii) contrastive learning to tighten representations of normal graphs while pushing anomalous ones apart. Contribution/Results: Extensive experiments on eight real-world benchmark datasets demonstrate that the method remains robust under varying levels of anomaly contamination in the training data and significantly outperforms state-of-the-art UGAD approaches, providing an effective solution for anomaly detection on noisy graph data in practical applications.
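The contrastive component described above (tighten normal-graph representations, push anomalous ones apart) can be sketched as a toy centroid-based loss. This is an illustrative assumption, not the paper's actual objective: the `margin` hyperparameter, the mean-centroid prototype, and the squared-hinge repulsion are all choices made for the sketch.

```python
import numpy as np

def contrastive_loss(embeddings, is_normal, margin=1.0):
    """Toy contrastive objective: pull normal-graph embeddings toward their
    centroid, push suspected-anomalous embeddings at least `margin` away.
    Illustrative only -- DeNoise's actual loss is not specified here."""
    normal = embeddings[is_normal]
    anomalous = embeddings[~is_normal]
    center = normal.mean(axis=0)                             # prototype of normal graphs
    pull = np.mean(np.sum((normal - center) ** 2, axis=1))   # compact normals
    d_anom = np.linalg.norm(anomalous - center, axis=1)
    push = np.mean(np.maximum(0.0, margin - d_anom) ** 2)    # repel anomalies
    return pull + push
```

With embeddings where anomalies already sit far from the normal centroid, the loss is near zero; an anomaly hiding inside the normal cluster incurs the full margin penalty.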

📝 Abstract
With the rapid growth of graph-structured data in critical domains, unsupervised graph-level anomaly detection (UGAD) has become a pivotal task. UGAD seeks to identify entire graphs that deviate from normal behavioral patterns. However, most Graph Neural Network (GNN) approaches implicitly assume that the training set is clean, containing only normal graphs, which is rarely true in practice. Even modest contamination by anomalous graphs can distort learned representations and sharply degrade performance. To address this challenge, we propose DeNoise, a robust UGAD framework explicitly designed for contaminated training data. It jointly optimizes a graph-level encoder, an attribute decoder, and a structure decoder via an adversarial objective to learn noise-resistant embeddings. Further, DeNoise introduces an encoder anchor-alignment denoising mechanism that fuses high-information node embeddings from normal graphs into all graph embeddings, improving representation quality while suppressing anomaly interference. A contrastive learning component then compacts normal graph embeddings and repels anomalous ones in the latent space. Extensive experiments on eight real-world datasets demonstrate that DeNoise consistently learns reliable graph-level representations under varying noise intensities and significantly outperforms state-of-the-art UGAD baselines.
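The abstract's joint optimization of a graph-level encoder, attribute decoder, and structure decoder under an adversarial objective might be combined roughly as follows. This is a minimal sketch under stated assumptions: the MSE reconstruction terms, the non-saturating GAN-style adversarial term, and the weights `lam_struct` and `lam_adv` are placeholders, not DeNoise's published equations.

```python
import numpy as np

def joint_objective(x, x_hat, a, a_hat, disc_fake,
                    lam_struct=1.0, lam_adv=0.1):
    """Hedged sketch of a joint objective in DeNoise's spirit:
    attribute reconstruction + structure reconstruction + an adversarial
    term pressuring the encoder toward noise-resistant embeddings."""
    l_attr = np.mean((x - x_hat) ** 2)        # attribute-decoder reconstruction
    l_struct = np.mean((a - a_hat) ** 2)      # structure-decoder reconstruction
    eps = 1e-8
    # encoder side of a GAN-style game: discriminator scores in (0, 1]
    l_adv = -np.mean(np.log(disc_fake + eps))
    return l_attr + lam_struct * l_struct + lam_adv * l_adv
```

Perfect reconstructions plus a fully fooled discriminator drive the objective to (numerically) zero, while reconstruction error or low discriminator scores raise it.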
Problem

Research questions and friction points this paper is trying to address.

Detect anomalous graphs in contaminated training data
Learn noise-resistant graph embeddings via adversarial optimization
Improve representation quality by suppressing anomaly interference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial objective for noise-resistant graph embeddings
Encoder anchor-alignment denoising mechanism
Contrastive learning compacts normal and repels anomalous embeddings
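The encoder anchor-alignment denoising mechanism listed above (fusing high-information normal-node embeddings into all graph embeddings) can be sketched as a simple convex fusion. The mean-pooled anchor and the mixing weight `alpha` are assumptions for illustration, not DeNoise's exact update rule.

```python
import numpy as np

def anchor_align(graph_embs, anchor_node_embs, alpha=0.5):
    """Illustrative anchor-alignment denoising: build an anchor from
    high-information normal-node embeddings and blend it into every
    graph embedding, damping anomaly-induced noise. Mean pooling and
    `alpha` are sketch assumptions, not the paper's exact mechanism."""
    anchor = anchor_node_embs.mean(axis=0)               # aggregate normal-node features
    return (1 - alpha) * graph_embs + alpha * anchor     # pull embeddings toward anchor
```

With `alpha = 0` the graph embeddings pass through unchanged; larger `alpha` contracts all embeddings toward the normal anchor, which is the suppression effect the bullet describes.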
Qingfeng Chen
School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
Haojin Zeng
School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
Jingyi Jie
School of Computer Science and Technology, Hainan University, Haikou 570228, China
Shichao Zhang
Guangxi Normal University
Debo Cheng
School of Computer Science and Technology, Hainan University, Haikou 570228, China