Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures

📅 2024-08-27
🏛️ Annual Conference of the PHM Society
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study systematically investigates the threat posed by adversarial attacks to multivariate time-series forecasting models in intelligent infrastructure scenarios. We employ white-box FGSM and BIM attacks to poison the inputs to the training process, revealing a stealthy degradation mechanism in which imperceptible perturbations cause substantial loss of prediction accuracy. Crucially, we empirically validate the cross-domain transferability of both attacks and defenses, from power load forecasting to hard disk failure prediction. To enhance robustness, we propose adversarial training and model hardening strategies specifically tailored to time-series forecasting. Evaluated on real-world power load and hard disk failure datasets, our defenses reduce RMSE by 72.41% and 94.81%, respectively, significantly improving model reliability. This work is among the first systematic explorations of adversarial robustness in cross-domain multivariate time-series forecasting, providing both theoretical foundations and practical methodologies for deploying trustworthy AI in critical infrastructure systems.

📝 Abstract
The emergence of deep learning models has revolutionized various industries over the last decade, leading to a surge in connected devices and infrastructures. However, these models can be tricked into making incorrect predictions with high confidence, leading to disastrous failures and security concerns. To this end, we explore the impact of adversarial attacks on multivariate time-series forecasting and investigate methods to counter them. Specifically, we employ untargeted white-box attacks, namely the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), to poison the inputs to the training process, effectively misleading the model. We also illustrate how subtle the modifications to the inputs are after the attack, making it quite difficult to detect the attack with the naked eye. Having demonstrated the feasibility of these attacks, we develop robust models through adversarial training and model hardening. We are among the first to showcase the transferability of these attacks and defenses by extrapolating our work from benchmark electricity data to a larger, 10-year real-world dataset used to predict the time-to-failure of hard disks. Our experimental results confirm that the attacks and defenses achieve the desired security thresholds, leading to a 72.41% and 94.81% decrease in RMSE for the electricity and hard disk datasets, respectively, after implementing the adversarial defenses.
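The core of the FGSM attack described above is a single gradient step on the input: perturb each input feature by ε in the direction that increases the loss. The following is a minimal illustrative sketch on a toy linear forecaster (the model, weights, and ε are hypothetical stand-ins; the paper attacks deep forecasting networks):

```python
import numpy as np

def fgsm_attack(x, grad, eps=0.05):
    """One-step FGSM: perturb the input along the sign of the loss
    gradient with respect to the input, bounded elementwise by eps."""
    return x + eps * np.sign(grad)

# Toy linear forecaster y_hat = w . x with squared-error loss.
# In the white-box setting the attacker knows w exactly.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # model weights (known to the attacker)
x = rng.normal(size=8)   # one multivariate input window
y = 1.0                  # ground-truth next value

def loss(z):
    return (w @ z - y) ** 2

# Gradient of the loss w.r.t. the INPUT (not the weights): 2*(w.x - y)*w
grad_x = 2.0 * (w @ x - y) * w

x_adv = fgsm_attack(x, grad_x)
# The perturbation is +/-eps per feature, yet the loss strictly increases.
```

Because each coordinate moves by at most ε, the perturbed window stays visually close to the original, which is the "hard to detect with the naked eye" property the abstract describes.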
Problem

Research questions and friction points this paper is trying to address.

Investigating adversarial attacks on multivariate time-series forecasting models
Developing defense methods against white-box attacks like FGSM and BIM
Evaluating attack and defense transferability across different infrastructure datasets
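BIM, the second white-box attack named above, is simply FGSM applied iteratively with a small step size, with the accumulated perturbation clipped back into an ε-ball around the clean input after each step. A hedged NumPy sketch on the same kind of toy linear forecaster (step sizes and model are illustrative, not the paper's):

```python
import numpy as np

def bim_attack(x, grad_fn, eps=0.05, alpha=0.01, steps=10):
    """Basic Iterative Method: repeated small FGSM steps; after each
    step the running perturbation is clipped into [x-eps, x+eps]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv

# Toy white-box setting: linear model y_hat = w . z, squared-error loss.
rng = np.random.default_rng(1)
w = rng.normal(size=8)
x = rng.normal(size=8)
y = 0.5

grad_fn = lambda z: 2.0 * (w @ z - y) * w  # input gradient of the loss
x_adv = bim_attack(x, grad_fn)
```

The clipping step is what keeps BIM's stronger multi-step perturbation within the same imperceptibility budget as one FGSM step.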
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used FGSM and BIM white-box adversarial attack methods
Implemented adversarial training for model hardening defense
Demonstrated attack transferability across electricity and hard disk datasets
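The adversarial-training defense listed above amounts to regenerating adversarial examples against the current model at each training step and fitting on a mix of clean and perturbed inputs. A minimal sketch on a toy linear forecaster, assuming FGSM as the inner attack; all names and hyperparameters here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 8))   # 64 input windows, 8 features each
w_true = rng.normal(size=8)
Y = X @ w_true                 # noiseless targets, for simplicity

w = np.zeros(8)                # model weights to learn
eps, lr = 0.05, 0.05
rmse0 = np.sqrt(np.mean(Y ** 2))  # RMSE of the untrained model

for epoch in range(200):
    # Craft FGSM inputs against the CURRENT model (white-box inner attack).
    resid = X @ w - Y                            # per-sample residuals
    grad_X = 2.0 * resid[:, None] * w[None, :]   # dLoss/dX per sample
    X_adv = X + eps * np.sign(grad_X)

    # One gradient step on the mixed clean + adversarial batch.
    X_mix = np.vstack([X, X_adv])
    Y_mix = np.concatenate([Y, Y])
    g_w = 2.0 * X_mix.T @ (X_mix @ w - Y_mix) / len(X_mix)
    w -= lr * g_w

clean_rmse = np.sqrt(np.mean((X @ w - Y) ** 2))
```

Regenerating the adversarial batch every epoch (rather than once up front) is what hardens the model against the attack it will actually face at its current parameters.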
Pooja Krishan
Department of Computer Science, San José State University, San José, CA, 95192, USA
Rohan Mohapatra
Department of Computer Science, San José State University, San José, CA, 95192, USA
Saptarshi Sengupta
Department of Computer Science, San José State University, San José, CA, 95192, USA