FreCT: Frequency-augmented Convolutional Transformer for Robust Time Series Anomaly Detection

📅 2025-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In time-series anomaly detection, existing reconstruction-based methods suffer from representation distortion due to anomaly contamination and neglect frequency-domain characteristics. To address these limitations, we propose a convolutional Transformer architecture integrating frequency-domain enhancement with local topological preservation. Our key contributions are: (1) a novel time-frequency dual-domain consistency learning mechanism that jointly leverages Fast Fourier Transform (FFT) and a stop-gradient KL divergence constraint to enhance representation robustness; and (2) a patch-level contrastive generative framework coupled with an improved Conv-Transformer hybrid structure to jointly model temporal dependencies and spectral patterns. Evaluated on four public benchmarks, our method achieves an average 3.2% improvement in F1-score over state-of-the-art approaches and demonstrates strong robustness against noise and burst anomalies.
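The frequency-domain enhancement described above can be illustrated with a minimal sketch. The snippet below is an assumption, not the paper's implementation: `frequency_features` is a hypothetical helper that uses NumPy's real FFT to turn a time-series window into a normalized magnitude spectrum, the kind of spectral view in which a burst anomaly, which smears energy across many frequency bins, looks very different from a clean periodic signal.

```python
import numpy as np

def frequency_features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of a 1-D window via the real FFT.

    Hypothetical helper: the paper reports using the Fast Fourier
    Transform, but the exact normalization and feature layout used
    by FreCT are assumptions here.
    """
    spectrum = np.fft.rfft(window)               # complex half-spectrum
    magnitude = np.abs(spectrum)                 # drop phase, keep energy
    return magnitude / (magnitude.sum() + 1e-8)  # normalize to a distribution

# A pure sine concentrates its energy in one frequency bin, while a
# burst anomaly spreads energy across many bins.
t = np.linspace(0, 1, 128, endpoint=False)
clean = np.sin(2 * np.pi * 8 * t)
burst = clean.copy()
burst[60:64] += 5.0                              # injected burst anomaly
```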

📝 Abstract
Time series anomaly detection is critical for system monitoring and risk identification across various domains, such as finance and healthcare. However, for most reconstruction-based approaches, detecting anomalies remains a challenge due to the complexity of sequential patterns in time series data. On the one hand, reconstruction-based techniques are susceptible to computational deviation stemming from anomalies, which can lead to impure representations of normal sequence patterns. On the other hand, they often focus on the time-domain dependencies of time series while ignoring frequency information beyond the time domain. To address these challenges, we propose a novel Frequency-augmented Convolutional Transformer (FreCT). FreCT uses patch operations to generate contrastive views and employs an improved Transformer architecture integrated with a convolution module to capture long-term dependencies while preserving local topology information. Frequency analysis based on the Fourier transform enhances the model's ability to capture crucial characteristics beyond the time domain. To protect training quality from anomalies and improve robustness, FreCT deploys stop-gradient Kullback-Leibler (KL) divergence and absolute error to optimize consistency information in both the time and frequency domains. Extensive experiments on four public datasets demonstrate that FreCT outperforms existing methods in identifying anomalies.
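The patch operations mentioned in the abstract can be sketched as follows. This assumes the simplest non-overlapping patching; FreCT's actual patch size, stride, and view construction are not specified here, and `make_patches` is a hypothetical helper.

```python
import numpy as np

def make_patches(series: np.ndarray, patch_len: int) -> np.ndarray:
    """Split a 1-D series into non-overlapping patches.

    Illustrative assumption: FreCT's exact patching scheme (overlap,
    stride, embedding) may differ; this is the simplest variant.
    """
    n = len(series) // patch_len * patch_len  # drop the ragged tail
    return series[:n].reshape(-1, patch_len)

series = np.arange(10, dtype=float)
patches = make_patches(series, patch_len=4)  # shape (2, 4); tail [8, 9] dropped
```

Contrastive views would then be derived per patch, e.g. a time-domain and a frequency-domain representation of the same patch that the model is trained to keep consistent.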
Problem

Research questions and friction points this paper is trying to address.

Detecting anomalies in complex sequential time series data
Addressing computational deviation from anomalies in reconstruction-based methods
Incorporating frequency information beyond time-domain dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-augmented Transformer with convolution module
Fourier transformation for frequency domain analysis
Stop-gradient KL divergence for robust training
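The stop-gradient KL idea in the last bullet can be sketched as below. This is a hedged reading, not the paper's exact loss: `stop_grad` stands in for `detach()` in an autodiff framework (numerically it is the identity), and the symmetric form is an assumption.

```python
import numpy as np

def kl_div(p: np.ndarray, q: np.ndarray, eps: float = 1e-8) -> float:
    """KL(p || q) for discrete distributions, with smoothing."""
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

def stop_grad(x: np.ndarray) -> np.ndarray:
    """Stand-in for detach(): numerically the identity; in an autodiff
    framework it would block gradients through this branch."""
    return x.copy()

def consistency_loss(p: np.ndarray, q: np.ndarray) -> float:
    # Symmetric stop-gradient KL (assumed form; the paper's exact loss
    # may differ): each view is pulled toward a frozen copy of the
    # other, so anomaly-driven gradients from one branch cannot
    # corrupt the other.
    return 0.5 * (kl_div(p, stop_grad(q)) + kl_div(q, stop_grad(p)))

p = np.array([0.7, 0.2, 0.1])   # e.g. time-domain view of a patch
q = np.array([0.6, 0.3, 0.1])   # e.g. frequency-domain view of the same patch
loss = consistency_loss(p, q)
```

The abstract pairs this divergence with an absolute-error term, which would be added analogously to the objective.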
👥 Authors
Wenxin Zhang, University of Chinese Academy of Sciences, Beijing, China
Ding Xu, Harbin Institute of Technology, Harbin, China
Guangzhen Yao, National University of Defense Technology, Changsha, China
Xiaojian Lin, Tsinghua University, Beijing, China
Renxiang Guan, National University of Defense Technology
Chengze Du, Beijing University of Posts and Telecommunications, Beijing, China
Renda Han
Xi Xuan, UEF (Computer Science); CityUHK (Computational Linguistics)
Cuicui Luo