🤖 AI Summary
This study systematically investigates the threat posed by adversarial attacks to multivariate time-series forecasting models in intelligent infrastructure scenarios. We employ white-box FGSM and BIM attacks to poison training data, revealing a stealthy degradation mechanism wherein imperceptible perturbations induce substantial losses in prediction accuracy. Crucially, we empirically validate the cross-domain transferability of both the attacks and the defenses, from power load forecasting to hard disk failure prediction. To enhance robustness, we propose adversarial training and model hardening strategies specifically tailored to time-series forecasting. Evaluated on real-world power load and hard disk failure datasets, our approach reduces RMSE by 72.41% and 94.81%, respectively, significantly improving model reliability. This work represents the first systematic exploration of adversarial robustness in cross-domain multivariate time-series forecasting, providing both theoretical foundations and practical methodologies for deploying trustworthy AI in critical infrastructure systems.
📝 Abstract
The emergence of deep learning models has revolutionized various industries over the last decade, leading to a surge in connected devices and infrastructures. However, these models can be tricked into making incorrect predictions with high confidence, leading to disastrous failures and security concerns. To this end, we explore the impact of adversarial attacks on multivariate time-series forecasting and investigate methods to counter them. Specifically, we employ untargeted white-box attacks, namely the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), to poison the inputs to the training process, effectively misleading the model. We also illustrate the subtlety of the modifications to the inputs after the attack, which makes detecting the attack with the naked eye quite difficult. Having demonstrated the feasibility of these attacks, we develop robust models through adversarial training and model hardening. We are among the first to showcase the transferability of these attacks and defenses by extrapolating our work from benchmark electricity data to a larger, 10-year real-world dataset used for predicting the time-to-failure of hard disks. Our experimental results confirm that the attacks and defenses achieve the desired security thresholds, leading to a 72.41% and 94.81% decrease in RMSE for the electricity and hard disk datasets, respectively, after implementing the adversarial defenses.
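To make the two attacks concrete, the sketch below shows the standard FGSM and BIM update rules applied to a toy forecaster. The linear model, its weights, and the epsilon/step values are illustrative assumptions, not the paper's actual architecture or hyperparameters; the point is only the perturbation mechanics: FGSM takes one step of size ε in the sign of the input gradient of the loss, while BIM repeats smaller steps and clips the result back into an ε-ball around the original input, yielding perturbations that stay visually imperceptible.

```python
import numpy as np

def fgsm(x, grad, eps):
    """FGSM: a single step of size eps in the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def bim(x, grad_fn, eps, alpha, steps):
    """BIM: repeated small FGSM steps, each clipped to the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # keep perturbation imperceptible
    return x_adv

# Toy linear forecaster y_hat = w @ x with squared-error loss (illustrative
# stand-in for a trained model); the input gradient is analytic here:
# dL/dx = 2 * (w @ x - y) * w.
w = np.array([0.5, -0.3, 0.2])
y = 1.0
grad_fn = lambda x: 2.0 * (w @ x - y) * w

x = np.array([1.0, 2.0, 3.0])          # one window of the input series
x_fgsm = fgsm(x, grad_fn(x), eps=0.1)  # each value shifts by at most 0.1
x_bim = bim(x, grad_fn, eps=0.1, alpha=0.02, steps=10)
```

Both attacks increase the forecaster's loss while bounding every per-feature change by ε, which is why the poisoned series is hard to distinguish from the clean one by eye.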