FreeGAD: A Training-Free yet Effective Approach for Graph Anomaly Detection

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning approaches to graph anomaly detection (GAD) suffer from high training costs and poor scalability. This paper proposes FreeGAD, the first fully training-free GAD framework: it generates anomaly-aware node representations via an affinity-gated residual encoder and computes anomaly scores directly through anchor-guided statistical deviation measures, bypassing parameter learning and iterative optimization entirely. The core insight is that the training phase of deep GAD models contributes less to final detection performance than commonly assumed; leveraging this, the authors design a zero-training paradigm. Evaluated on multiple cross-domain benchmark datasets, the method surpasses state-of-the-art (SOTA) approaches in detection accuracy while reducing computational overhead by one to two orders of magnitude. This yields significant gains in efficiency and scalability, establishing a lightweight, robust paradigm for large-scale GAD.

📝 Abstract
Graph Anomaly Detection (GAD) aims to identify nodes that deviate from the majority within a graph, playing a crucial role in applications such as social networks and e-commerce. Despite the current advancements in deep learning-based GAD, existing approaches often suffer from high deployment costs and poor scalability due to their complex and resource-intensive training processes. Surprisingly, our empirical findings suggest that the training phase of deep GAD methods, commonly perceived as crucial, may actually contribute less to anomaly detection performance than expected. Inspired by this, we propose FreeGAD, a novel training-free yet effective GAD method. Specifically, it leverages an affinity-gated residual encoder to generate anomaly-aware representations. Meanwhile, FreeGAD identifies anchor nodes as pseudo-normal and anomalous guides, followed by calculating anomaly scores through anchor-guided statistical deviations. Extensive experiments demonstrate that FreeGAD achieves superior anomaly detection performance, efficiency, and scalability on multiple benchmark datasets from diverse domains, without any training or iterative optimization.
Problem

Research questions and friction points this paper is trying to address.

Identifying graph nodes that deviate from the majority, without any model training
Reducing the deployment costs and poor scalability of deep GAD methods
Scoring anomalies efficiently via anchor-guided statistical deviations rather than iterative optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free graph anomaly detection method
Affinity-gated residual encoder for representations
Anchor-guided statistical deviation scoring
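The three ideas above compose into a single forward pass with no learned parameters: propagate features, pick anchor nodes, and score by deviation. The sketch below is a hypothetical NumPy re-implementation of that pipeline, not the paper's exact algorithm; the cosine-based gate, the affinity-ranked anchor selection, the mean-distance scoring rule, and the name `freegad_sketch` are all illustrative assumptions.

```python
import numpy as np

def _affinity(H, P):
    """Cosine similarity between each node and its propagated neighborhood."""
    H_prop = P @ H
    num = (H * H_prop).sum(axis=1)
    denom = np.linalg.norm(H, axis=1) * np.linalg.norm(H_prop, axis=1) + 1e-12
    return num / denom, H_prop

def freegad_sketch(A, X, num_layers=2, num_anchors=2):
    """Training-free anomaly scoring: propagate, pick anchors, measure deviation.

    A: (n, n) symmetric adjacency matrix; X: (n, d) node features.
    Returns one anomaly score per node (higher = more anomalous).
    """
    n = A.shape[0]
    # Symmetrically normalized adjacency with self-loops (standard GNN preprocessing).
    A_hat = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    P = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Affinity-gated residual propagation: nodes that agree with their
    # neighborhood get smoothed; disagreeing nodes fall back toward raw features.
    H = X.astype(float)
    for _ in range(num_layers):
        aff, H_prop = _affinity(H, P)
        gate = ((aff + 1.0) / 2.0)[:, None]   # map cosine from [-1, 1] to [0, 1]
        H = gate * H_prop + (1.0 - gate) * X  # residual connection to the input

    # Anchor selection: high-affinity nodes act as pseudo-normal guides,
    # low-affinity nodes as pseudo-anomalous guides.
    aff, _ = _affinity(H, P)
    normal_idx = np.argsort(-aff)[:num_anchors]
    anom_idx = np.argsort(aff)[:num_anchors]

    # Anchor-guided deviation: far from the normal anchors and close to the
    # anomalous anchors -> high anomaly score.
    d_norm = np.linalg.norm(H[:, None, :] - H[normal_idx][None, :, :], axis=2).mean(axis=1)
    d_anom = np.linalg.norm(H[:, None, :] - H[anom_idx][None, :, :], axis=2).mean(axis=1)
    return d_norm - d_anom
```

On a toy graph where one feature-deviant node hangs off a tight, feature-homogeneous clique, this sketch assigns the deviant node the largest score, and no gradient step is taken anywhere in the pipeline, which is what makes the approach cheap to deploy.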
👥 Authors
Yunfeng Zhao (Tianjin University)
Yixin Liu (Griffith University, Gold Coast, Australia)
Shiyuan Li (Guangxi University, Nanning, China)
Qingfeng Chen (Guangxi University, Nanning, China)
Yu Zheng (Griffith University, Gold Coast, Australia)
Shirui Pan (Professor, ARC Future Fellow, FQA, Director of TrustAGI Lab, Griffith University)