🤖 AI Summary
Graph anomaly detection suffers from scarce and noisy labels, which limits the robustness of existing unsupervised reconstruction-based methods such as graph autoencoders (GAEs) that score anomalies solely by reconstruction error. To address this, the paper proposes Graph Evidential Learning (GEL), a probabilistic framework that redefines the reconstruction process through evidential learning. GEL models both node features and graph topology with evidential distributions, quantifying two types of uncertainty (graph uncertainty and reconstruction uncertainty) and incorporating both into the anomaly score, thereby mitigating overreliance on a single reconstruction-error metric. The method operates in a fully unsupervised setting without requiring anomaly labels and provides calibrated, interpretable uncertainty estimates for its predictions. Extensive experiments on multiple benchmark datasets demonstrate state-of-the-art performance, with marked improvements in robustness against feature noise and topological perturbations.
📝 Abstract
Graph anomaly detection faces significant challenges due to the scarcity of reliable anomaly-labeled datasets, driving the development of unsupervised methods. Graph autoencoders (GAEs) have emerged as a dominant approach by reconstructing graph structures and node features while deriving anomaly scores from reconstruction errors. However, relying solely on reconstruction error for anomaly detection has limitations, as it increases sensitivity to noise and the risk of overfitting. To address these issues, we propose Graph Evidential Learning (GEL), a probabilistic framework that redefines the reconstruction process through evidential learning. By modeling node features and graph topology using evidential distributions, GEL quantifies two types of uncertainty, graph uncertainty and reconstruction uncertainty, and incorporates them into the anomaly scoring mechanism. Extensive experiments demonstrate that GEL achieves state-of-the-art performance while maintaining high robustness against noise and structural perturbations.
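The abstract's core idea, replacing a raw reconstruction error with a score that also accounts for evidential uncertainty, can be illustrated with a minimal sketch. The paper's exact formulation is not given here; this follows the standard Normal-Inverse-Gamma (NIG) parameterization common in evidential regression as one plausible instantiation, and the function name and scoring rule below are illustrative assumptions, not GEL's actual equations.

```python
# Hedged sketch: evidential anomaly scoring with an assumed NIG head.
# The decoder is imagined to emit, per feature, the NIG evidence parameters
# (gamma, nu, alpha, beta); the paper's true parameterization may differ.
import numpy as np

def evidential_anomaly_score(x, gamma, nu, alpha, beta):
    """Combine reconstruction error with two uncertainty terms.

    x     : observed node features, shape (n, d)
    gamma : predicted mean (the reconstruction), shape (n, d)
    nu, alpha, beta : NIG evidence parameters, shape (n, d),
                      with nu > 0, alpha > 1, beta > 0.
    """
    recon_error = np.mean((x - gamma) ** 2, axis=1)         # classic GAE score
    aleatoric = np.mean(beta / (alpha - 1), axis=1)          # expected data noise
    epistemic = np.mean(beta / (nu * (alpha - 1)), axis=1)   # evidence (model) uncertainty
    # Illustrative scoring rule: reconstruction error plus both uncertainties,
    # so low-evidence nodes score higher even when reconstruction looks good.
    return recon_error + aleatoric + epistemic

rng = np.random.default_rng(0)
n, d = 5, 4
x = rng.normal(size=(n, d))
gamma = x + 0.1 * rng.normal(size=(n, d))  # near-perfect reconstruction
nu = np.full((n, d), 1.0)
alpha = np.full((n, d), 2.0)
beta = np.full((n, d), 0.5)
scores = evidential_anomaly_score(x, gamma, nu, alpha, beta)
print(scores.shape)  # one score per node
```

Note how a node whose features are reconstructed almost exactly still receives a nonzero score when its evidence parameters imply high uncertainty; that is the behavior the abstract attributes to incorporating uncertainty into the anomaly scoring mechanism.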