DConAD: A Differencing-based Contrastive Representation Learning Framework for Time Series Anomaly Detection

📅 2025-04-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Time-series anomaly detection faces challenges including diverse anomaly patterns, scarce labeled data, and insufficient representation robustness caused by strong prior assumptions. To address these, we propose a differencing-augmented unsupervised contrastive representation learning framework. First, a differencing transformation is employed to construct positive temporal sample pairs, mitigating the sparsity of anomalies. Second, a negative-free contrastive loss based on KL divergence is introduced, coupled with a stop-gradient mechanism, to explicitly encourage the model to focus on modeling normal patterns and to avoid reconstruction bias. Third, a Transformer architecture is integrated for joint spatiotemporal modeling. Extensive experiments demonstrate that our method achieves significant improvements over nine state-of-the-art baselines across five public benchmark datasets. The source code is publicly available.
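The pipeline in the summary can be sketched in NumPy under stated assumptions: this is an illustrative sketch, not the authors' implementation, and `proj` is a hypothetical random linear map standing in for the paper's transformer encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # KL(p || q) between distributions along the last axis.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def positive_pair(series):
    # First-order differencing produces an augmented view of the series;
    # (original, differenced) acts as the positive sample pair.
    diff = np.diff(series, axis=0, prepend=series[:1])
    return series, diff

def contrastive_score(z_raw, z_diff):
    # Negative-free contrastive objective: symmetric KL divergence between
    # the softmax-normalized representations of the two views. During
    # training a stop-gradient on one branch prevents collapse; here we
    # only evaluate the discrepancy, which doubles as an anomaly score.
    p, q = softmax(z_raw), softmax(z_diff)
    return 0.5 * (kl_div(p, q) + kl_div(q, p))

# Toy example: a sine wave with one injected spike, embedded by a random
# linear map standing in for a learned encoder.
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t)[:, None]
x[120] += 3.0  # injected anomaly
raw, diff = positive_pair(x)
rng = np.random.default_rng(0)
proj = rng.normal(size=(1, 16))  # hypothetical stand-in encoder
scores = contrastive_score(raw @ proj, diff @ proj)  # one score per timestep
```

Timesteps where the two views disagree most receive the highest scores; in the real framework the encoder is trained so that this disagreement stays small on normal patterns.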

📝 Abstract
Time series anomaly detection holds notable importance for risk identification and fault detection across diverse application domains. Unsupervised learning methods have become popular because they require no labels. However, due to the challenges posed by the multiplicity of abnormal patterns, the sparsity of anomalies, and the growth of data scale and complexity, these methods often fail to capture robust and representative dependencies within the time series for identifying anomalies. To enhance the ability of models to capture normal patterns of time series and avoid the retrogression of modeling ability triggered by dependence on high-quality prior knowledge, we propose a differencing-based contrastive representation learning framework for time series anomaly detection (DConAD). Specifically, DConAD generates differential data to provide additional information about the time series and utilizes a transformer-based architecture to capture spatiotemporal dependencies, which enhances the robustness of unbiased representation learning. Furthermore, DConAD implements a novel KL divergence-based contrastive learning paradigm that uses only positive samples to avoid deviation from reconstruction and deploys the stop-gradient strategy to compel convergence. Extensive experiments on five public datasets show the superiority and effectiveness of DConAD compared with nine baselines. The code is available at https://github.com/shaieesss/DConAD.
Problem

Research questions and friction points this paper is trying to address.

Detect anomalies in time series without labeled data
Address challenges of diverse anomalies and data complexity
Improve robustness in capturing normal time series patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differencing-based contrastive learning for anomaly detection
Transformer architecture captures spatiotemporal dependencies
KL divergence contrastive learning with positive samples
Wenxin Zhang
University of Chinese Academy of Sciences, Beijing, China
Xiaojian Lin
Tsinghua University, Beijing, China
Wenjun Yu
Shanghai University of International Business and Economics
Guangzhen Yao
Northeast Normal University, Changchun, China
Jingxiang Zhong
Fuzhou University, Fuzhou, China
Yu Li
Hubei University, Wuhan, China
Renda Han
Institute of Electrical and Electronics Engineers
Songcheng Xu
Northeastern University, Shenyang, China
Hao Shi
University of Chinese Academy of Sciences, Beijing, China
Cuicui Luo
University of Chinese Academy of Sciences, Beijing, China