🤖 AI Summary
Graph anomaly detection (GAD) suffers from the high training costs and poor scalability of deep learning models. This paper proposes FreeGAD, the first fully training-free GAD framework: it generates anomaly-aware node representations with an affinity-gated residual encoder and computes anomaly scores directly from anchor-guided statistical deviation measures, bypassing parameter learning and iterative optimization entirely. The core insight is that the training phase of deep GAD models contributes far less to final detection performance than commonly assumed. Across benchmark datasets from diverse domains, FreeGAD surpasses state-of-the-art (SOTA) approaches in detection accuracy while cutting computational overhead by one to two orders of magnitude, establishing a lightweight, robust paradigm for large-scale GAD.
📝 Abstract
Graph Anomaly Detection (GAD) aims to identify nodes that deviate from the majority within a graph, playing a crucial role in applications such as social networks and e-commerce. Despite recent advances in deep learning-based GAD, existing approaches often suffer from high deployment costs and poor scalability due to their complex and resource-intensive training processes. Surprisingly, our empirical findings suggest that the training phase of deep GAD methods, commonly perceived as crucial, may actually contribute less to anomaly detection performance than expected. Inspired by this, we propose FreeGAD, a novel training-free yet effective GAD method. Specifically, it leverages an affinity-gated residual encoder to generate anomaly-aware representations. FreeGAD then identifies anchor nodes as pseudo-normal and pseudo-anomalous guides, and calculates anomaly scores through anchor-guided statistical deviations. Extensive experiments demonstrate that FreeGAD achieves superior anomaly detection performance, efficiency, and scalability on multiple benchmark datasets from diverse domains, without any training or iterative optimization.