🤖 AI Summary
Time-series anomaly detection faces several challenges: diverse anomaly patterns, scarce labeled data, and insufficient representation robustness caused by strong prior assumptions. To address these, we propose a differencing-augmented unsupervised contrastive representation learning framework. First, a differencing transformation is employed to construct positive temporal sample pairs, mitigating the sparsity of anomalies. Second, a negative-free contrastive loss based on KL divergence is introduced, coupled with a stop-gradient mechanism, to explicitly compel the model to focus on modeling normal patterns and to avoid reconstruction bias. Third, a Transformer architecture is integrated for joint spatiotemporal modeling. Extensive experiments demonstrate that our method achieves significant improvements over nine state-of-the-art baselines across five public benchmark datasets. The source code is publicly available.
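The negative-free, KL-divergence objective with stop-gradient is the core of the second step. Below is a minimal PyTorch sketch of that idea; the function name, the symmetric form of the loss, and the assumption that the encoder emits `(batch, time, dim)` representations are illustrative choices, not the authors' released implementation:

```python
# Minimal sketch of a negative-free contrastive loss based on KL divergence
# with stop-gradient, applied to representations of an original window and
# its differenced (positive) view. Names and shapes are assumptions.
import torch
import torch.nn.functional as F

def kl_contrastive_loss(z_orig: torch.Tensor, z_diff: torch.Tensor) -> torch.Tensor:
    """z_orig, z_diff: (batch, time, dim) encoder outputs for the two views.

    Each time-step representation is normalized into a distribution over the
    feature dimension; the two views are pulled together via a symmetric KL
    term. A stop-gradient (detach) on the target side of each term means only
    one branch receives gradients at a time, which helps a negative-free
    objective avoid trivial collapse.
    """
    p = F.log_softmax(z_orig, dim=-1)
    q = F.log_softmax(z_diff, dim=-1)
    loss = F.kl_div(p, q.detach(), log_target=True, reduction="batchmean") \
         + F.kl_div(q, p.detach(), log_target=True, reduction="batchmean")
    return 0.5 * loss
```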
📝 Abstract
Time series anomaly detection holds notable importance for risk identification and fault detection across diverse application domains. Unsupervised learning methods have become popular because they require no labels. However, owing to the multiplicity of abnormal patterns, the sparsity of anomalies, and the growing scale and complexity of data, these methods often fail to capture robust and representative dependencies within the time series for identifying anomalies. To enhance the ability of models to capture the normal patterns of time series, and to avoid the degradation of modeling ability caused by dependence on high-quality prior knowledge, we propose a differencing-based contrastive representation learning framework for time series anomaly detection (DConAD). Specifically, DConAD generates differenced data to provide additional information about the time series and uses a Transformer-based architecture to capture spatiotemporal dependencies, which enhances the robustness of unbiased representation learning. Furthermore, DConAD implements a novel KL divergence-based contrastive learning paradigm that uses only positive samples, avoiding the bias introduced by reconstruction, and deploys a stop-gradient strategy to compel convergence. Extensive experiments on five public datasets show the superiority and effectiveness of DConAD compared with nine baselines. The code is available at https://github.com/shaieesss/DConAD.
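To make the differencing-based view construction and the Transformer-based spatiotemporal encoder concrete, here is a short sketch. The module name, hyperparameters, and the zero-padding convention for the differenced view are placeholder assumptions for illustration; the authors' actual code is in the repository linked above:

```python
# Sketch of (1) first-order differencing to build a positive view of a
# multivariate time-series window, and (2) a Transformer encoder for joint
# spatiotemporal representation. Assumed shapes: (batch, time, features).
import torch
import torch.nn as nn

class DiffTransformerEncoder(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)  # per-step feature mixing
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) -> (batch, time, d_model)
        return self.encoder(self.embed(x))

def differenced_view(x: torch.Tensor) -> torch.Tensor:
    """First-order differencing along time; the first step is zero-padded
    (an assumption here) so the view keeps the original window length."""
    d = x[:, 1:, :] - x[:, :-1, :]
    return torch.cat([torch.zeros_like(x[:, :1, :]), d], dim=1)

# Usage: encode the raw window and its differenced view as a positive pair,
# e.g. to feed a negative-free contrastive loss like the one sketched above.
x = torch.randn(8, 100, 25)            # (batch, window, features)
enc = DiffTransformerEncoder(n_features=25)
z_orig, z_diff = enc(x), enc(differenced_view(x))
```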