🤖 AI Summary
To address the limitations of graph neural networks (GNNs) in graph-level anomaly detection (GAD) caused by scarce labeled data and severe class imbalance, this paper proposes FracAug, a novel framework with three key components: (1) fractional graph augmentation, the first of its kind, which generates multi-scale topological variants while preserving semantic integrity; (2) a weighted distance-aware margin loss that sharpens decision boundaries between normal and anomalous graphs; and (3) a model-agnostic mutual-verification pseudo-labeling mechanism that enables unbiased pseudo-label generation and iterative self-training. FracAug is plug-and-play and requires no modification to backbone GNN architectures. Across 14 mainstream GNN backbones on 12 real-world datasets, it improves average AUROC, AUPRC, and F1-score by up to 5.72%, 7.23%, and 4.18%, respectively, significantly enhancing generalization and robustness under low-supervision settings.
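The summary does not give the exact form of the weighted distance-aware margin loss, but its stated role (sharpening the boundary between normal and anomalous graphs under class imbalance) can be illustrated with a hypothetical sketch: graph embeddings are pulled within one margin of a normal-class center or pushed beyond a larger margin, with per-class weights counteracting imbalance. All names, margins, and the centering scheme here are illustrative assumptions, not the paper's actual formulation.

```python
import math

def margin_loss(embeddings, labels, center, m_norm=1.0, m_anom=2.0):
    """Hypothetical weighted distance-aware margin loss sketch.

    Normal graphs (label 0) incur loss if their embedding lies farther
    than m_norm from the normal-class center; anomalous graphs (label 1)
    incur loss if they lie closer than m_anom. Inverse class counts act
    as weights so the minority class is not drowned out.
    """
    n_norm = sum(1 for y in labels if y == 0) or 1
    n_anom = sum(1 for y in labels if y == 1) or 1
    total = 0.0
    for z, y in zip(embeddings, labels):
        d = math.dist(z, center)  # Euclidean distance to the center
        if y == 0:
            total += (1.0 / n_norm) * max(0.0, d - m_norm)      # pull in
        else:
            total += (1.0 / n_anom) * max(0.0, m_anom - d)      # push out
    return total
```

In a real system the center would be learned or estimated from labeled normal graphs, and the loss would be differentiated through the GNN encoder; this sketch only shows the margin geometry.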
📄 Abstract
Graph-level anomaly detection (GAD) is critical in diverse domains such as drug discovery, yet high labeling costs and dataset imbalance hamper the performance of Graph Neural Networks (GNNs). To address these issues, we propose FracAug, an innovative plug-in augmentation framework that enhances GNNs by generating semantically consistent graph variants and pseudo-labeling with mutual verification. Unlike previous heuristic methods, FracAug learns the semantics within given graphs and synthesizes fractional variants, guided by a novel weighted distance-aware margin loss. This loss captures multi-scale topology to generate diverse, semantics-preserving graphs unaffected by data imbalance. FracAug then uses predictions on both original and augmented graphs to pseudo-label unlabeled data, iteratively expanding the training set. As a model-agnostic module compatible with various GNNs, FracAug demonstrates remarkable universality and efficacy: experiments across 14 GNNs on 12 real-world datasets show consistent gains, boosting average AUROC, AUPRC, and F1-score by up to 5.72%, 7.23%, and 4.18%, respectively.
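The mutual-verification pseudo-labeling step described above (using predictions on both the original and the augmented graph before trusting a pseudo-label) can be sketched as follows. The agreement rule, the confidence threshold `tau`, and the helper names are assumptions for illustration; the paper's actual criterion may differ.

```python
def mutual_pseudo_label(p_orig, p_aug, tau=0.9):
    """Assign a pseudo-label only when predictions on the original and
    the augmented graph agree AND both are confident (>= tau).
    Returns 0/1, or None to abstain. tau is an assumed threshold."""
    y_orig = 1 if p_orig >= 0.5 else 0
    y_aug = 1 if p_aug >= 0.5 else 0
    conf_orig = max(p_orig, 1.0 - p_orig)
    conf_aug = max(p_aug, 1.0 - p_aug)
    if y_orig == y_aug and min(conf_orig, conf_aug) >= tau:
        return y_orig
    return None

def expand_training_set(labeled, unlabeled_preds, tau=0.9):
    """One self-training round: graphs whose original/augmented
    predictions pass mutual verification join the labeled set.
    unlabeled_preds maps graph id -> (p_orig, p_aug)."""
    new = list(labeled)
    for gid, (p_orig, p_aug) in unlabeled_preds.items():
        y = mutual_pseudo_label(p_orig, p_aug, tau)
        if y is not None:
            new.append((gid, y))
    return new
```

Iterating this round, with the model retrained on the expanded set each time, mirrors the self-training loop the abstract describes; abstaining on disagreement is what keeps the pseudo-labels unbiased by a single view.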