🤖 AI Summary
This work exposes a critical vulnerability of Transformer-based time series forecasting models in the frequency domain. Existing adversarial attacks operate solely in the time domain and generalize poorly to regression tasks; to address this, we propose the first targeted adversarial attack framework that integrates both time- and frequency-domain losses. Built on the Carlini & Wagner (CW) optimization paradigm, our method embeds the Fourier transform directly into the loss function, jointly minimizing time-domain prediction error and frequency-domain spectral distortion, and is the first to systematically use frequency-domain priors to guide adversarial perturbation generation. Evaluated across multiple benchmark datasets, the attack is highly effective, increasing mean absolute error (MAE) by up to 3.2×. These results reveal an inherent structural weakness of time series models in the frequency domain and fill a fundamental gap in frequency-aware adversarial robustness research for time series forecasting.
📝 Abstract
Transformer-based models have made significant progress in time series forecasting. However, a key limitation of deep learning models is their susceptibility to adversarial attacks, which remains understudied in the context of time series prediction. Unlike areas such as computer vision, where adversarial robustness has been studied extensively, time series data carry frequency-domain features that are central to the forecasting task yet largely unexplored from an adversarial perspective. This paper proposes a time series prediction attack algorithm based on a frequency-domain loss. Specifically, we adapt an attack method originally designed for classification tasks to the forecasting setting and optimize adversarial samples using both time-domain and frequency-domain losses. To the best of our knowledge, no prior work uses frequency information for time-series adversarial attacks. Our experimental results show that current time series prediction models are vulnerable to adversarial attacks, and that our approach achieves strong performance on major time series forecasting datasets.
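The joint time/frequency objective described above can be sketched in a CW-style optimization loop. This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation: the function name `cw_freq_attack`, the weighting parameters `alpha` and `c`, and the use of FFT magnitude distance as the spectral loss are all illustrative choices; the paper's exact loss formulation and constraints may differ.

```python
import torch
import torch.nn.functional as F

def cw_freq_attack(model, x, y_target, alpha=0.5, c=10.0, steps=100, lr=0.05):
    """Targeted CW-style attack with a joint time/frequency loss (sketch).

    Optimizes a perturbation delta so that model(x + delta) approaches
    y_target in both the time domain (MSE) and the frequency domain
    (distance between FFT magnitude spectra), while an L2 penalty on
    delta keeps the perturbation small, as in the CW formulation.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        y_pred = model(x + delta)
        # Time-domain loss: prediction error toward the attacker's target.
        time_loss = F.mse_loss(y_pred, y_target)
        # Frequency-domain loss: spectral distortion of the prediction,
        # computed by embedding the Fourier transform in the loss.
        freq_loss = F.mse_loss(torch.fft.rfft(y_pred, dim=-1).abs(),
                               torch.fft.rfft(y_target, dim=-1).abs())
        # CW-style objective: perturbation norm + weighted attack losses.
        loss = delta.pow(2).mean() + c * (alpha * time_loss
                                          + (1.0 - alpha) * freq_loss)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```

A usage example: with any differentiable forecaster (here a plain linear layer standing in for a Transformer), `cw_freq_attack(model, x, y_target)` returns an adversarial input whose forecast is pulled toward `y_target` in both domains.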