🤖 AI Summary
This work addresses the challenge of anomaly detection in dynamic graphs, where labeled anomalies are scarce, leading to weak discriminability in unsupervised methods and limited generalization in semi-supervised approaches. To overcome this, the authors propose a model-agnostic framework that encodes residual representations to capture deviations between current interactions and their historical context. They introduce a constraint loss based on two concentric hyperspheres to confine normal sample representations within a bounded region, and employ a normalizing flow-based dual-boundary optimization strategy to model the likelihood distribution of normal data. By leveraging limited labeled anomalies when available while preserving generalization to unseen anomalies, the method consistently outperforms existing approaches across multiple evaluation settings, demonstrating its effectiveness, robustness, and balanced trade-off between discrimination and generalization.
📝 Abstract
Dynamic graph anomaly detection (DGAD) is critical for many real-world applications but remains challenging due to the scarcity of labeled anomalies. Existing methods are either unsupervised or semi-supervised: unsupervised methods avoid the need for labeled anomalies but often produce ambiguous decision boundaries, whereas semi-supervised methods can overfit to the limited labeled anomalies and generalize poorly to unseen anomalies. To address this gap, we consider a largely underexplored problem in DGAD: learning a discriminative boundary from normal/unlabeled data, while leveraging limited labeled anomalies \textbf{when available} without sacrificing generalization to unseen anomalies. To this end, we propose an effective, generalizable, and model-agnostic framework with three main components: (i) residual representation encoding that captures deviations between current interactions and their historical context, providing anomaly-relevant signals; (ii) a restriction loss that constrains the normal representations within an interval bounded by two co-centered hyperspheres, ensuring consistent scales while keeping anomalies separable; (iii) a bi-boundary optimization strategy that learns a discriminative and robust boundary using the normal log-likelihood distribution modeled by a normalizing flow. Extensive experiments demonstrate the superiority of our framework across diverse evaluation settings.
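To make the hypersphere restriction concrete, here is a minimal NumPy sketch of one plausible form of such a loss: it penalizes normal representations whose norms fall inside the inner sphere (radius `r_min`) or outside the outer sphere (radius `r_max`), leaving a bounded shell where normal samples are free to lie. The function name, radii, and squared-hinge form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hypersphere_restriction_loss(z, r_min=1.0, r_max=2.0):
    """Illustrative restriction loss (assumed form, not the paper's exact one).

    z: (N, d) array of sample representations.
    Penalizes representation norms outside the shell [r_min, r_max]
    bounded by two co-centered hyperspheres.
    """
    norms = np.linalg.norm(z, axis=1)
    inner = np.maximum(r_min - norms, 0.0)  # violation: inside the inner sphere
    outer = np.maximum(norms - r_max, 0.0)  # violation: outside the outer sphere
    return float(np.mean(inner**2 + outer**2))
```

A representation with norm in [1.0, 2.0] incurs zero loss, so minimizing this term pulls normal samples into the shell without collapsing them to a single point, which is one way to keep their scales consistent while leaving room for anomalies to fall outside.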