BadTime: An Effective Backdoor Attack on Multivariate Long-Term Time Series Forecasting

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic investigation into the backdoor vulnerability of multivariate long-term time series forecasting (MLTSF) models. To address the lack of dedicated backdoor attacks for MLTSF, we propose BadTime—the first efficient and stealthy backdoor framework—featuring contrastive-guided sample selection, graph attention networks to identify critical variables, lag-based timing analysis to pinpoint trigger instants, and a jigsaw-style distributed trigger structure. BadTime jointly optimizes trigger patterns and model parameters during data poisoning and customized training. Extensive experiments across climate and financial forecasting tasks demonstrate that BadTime reduces target-variable MAE by over 50% compared to state-of-the-art methods, while improving stealthiness by more than threefold. These results significantly breach the security boundary of existing MLTSF models, revealing critical vulnerabilities previously unexplored in time series forecasting.

📝 Abstract
Multivariate Long-Term Time Series Forecasting (MLTSF) models are increasingly deployed in critical domains such as climate, finance, and transportation. Although a variety of powerful MLTSF models have been proposed to improve predictive performance, the robustness of MLTSF models against malicious backdoor attacks remains entirely unexplored, which is crucial to ensuring their reliable and trustworthy deployment. To address this gap, we conduct an in-depth study on backdoor attacks against MLTSF models and propose the first effective attack method named BadTime. BadTime executes a backdoor attack by poisoning training data and customizing the backdoor training process. During data poisoning, BadTime proposes a contrast-guided strategy to select the most suitable training samples for poisoning, then employs a graph attention network to identify influential variables for trigger injection. Subsequently, BadTime further localizes optimal positions for trigger injection based on lag analysis and proposes a puzzle-like trigger structure that distributes the trigger across multiple poisoned variables to jointly steer the prediction of the target variable. During backdoor training, BadTime alternately optimizes the model and triggers via proposed tailored optimization objectives. Extensive experiments show that BadTime significantly outperforms state-of-the-art (SOTA) backdoor attacks on time series forecasting by reducing MAE by over 50% on target variables and boosting stealthiness by more than 3 times.
Problem

Research questions and friction points this paper is trying to address.

Explores backdoor attacks on multivariate long-term time series forecasting models
Proposes BadTime method for effective data poisoning and trigger injection
Enhances attack stealthiness and reduces prediction errors on target variables
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrast-guided strategy for selecting which training samples to poison
Graph attention network to identify the most influential variables for trigger injection
Puzzle-like trigger structure distributed across multiple variables for stealthiness
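The pipeline the paper describes — select a poisoning sample, score variables, localize the injection instant via lag analysis, then distribute the trigger across several variables — can be sketched roughly as below. This is an illustrative approximation, not the paper's method: Euclidean distance stands in for the contrast-guided criterion, absolute Pearson correlation stands in for graph-attention variable scoring, and a cross-correlation peak stands in for the lag analysis; all function names and parameters are hypothetical.

```python
import numpy as np

def select_poison_sample(windows, anchor):
    """Contrast-guided selection proxy (assumption): pick the training window
    closest to an anchor pattern in Frobenius distance."""
    dists = np.linalg.norm(windows - anchor, axis=(1, 2))
    return int(np.argmin(dists))

def inject_jigsaw_trigger(window, target_var, trigger, k=2):
    """Distribute `trigger` piecewise across the k non-target variables most
    correlated with the target variable, each piece placed at the lag that
    maximizes cross-correlation with the target (stand-ins for the paper's
    GAT scoring and lag analysis)."""
    T, V = window.shape
    out = window.copy()
    # Score variables; the target itself is never poisoned.
    scores = np.full(V, -np.inf)
    for v in range(V):
        if v == target_var:
            continue
        c = np.corrcoef(out[:, v], out[:, target_var])[0, 1]
        scores[v] = abs(c) if np.isfinite(c) else 0.0
    top_vars = np.argsort(scores)[-k:]
    # Split the trigger into k "jigsaw" pieces, one per poisoned variable.
    pieces = np.array_split(trigger, k)
    for v, piece in zip(top_vars, pieces):
        xc = np.correlate(out[:, v], out[:, target_var], mode="full")
        lag = int(np.argmax(xc)) - (T - 1)
        start = int(np.clip(abs(lag), 0, T - len(piece)))
        out[start:start + len(piece), v] += piece
    return out, top_vars
```

A usage note: in the actual attack the trigger values themselves are optimized jointly with the model during backdoor training, whereas here a fixed additive pattern is used purely to show where and on which variables a distributed trigger would land.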