🤖 AI Summary
This work addresses the challenge of mounting backdoor attacks on time-series classification (TSC) models when the original training data is unavailable. We propose a pseudo-data-driven two-stage training framework: Stage I generates a high-quality adversarial pseudo-dataset from arbitrary external data via targeted adversarial attacks, enabling efficient backdoor injection without access to the original training data; Stage II jointly optimizes a logits-alignment loss with frozen batch-normalization layers to achieve a high attack success rate while preserving generalization on clean samples. To our knowledge, this is the first approach enabling controllable, training-data-free backdoor implantation in TSC, and we pair it with a defensive unlearning strategy to mitigate the threat. Evaluated across five trigger types and four TSC architectures on UCR benchmark datasets, our method achieves an average attack success rate above 92% with under 1.5% clean-accuracy degradation, while the proposed defense reduces attack success rates below 8% without compromising original model performance.
📝 Abstract
Time Series Classification (TSC) is highly vulnerable to backdoor attacks, posing significant security threats. Existing methods primarily focus on data poisoning during the training phase, designing sophisticated triggers to improve stealthiness and the attack success rate (ASR). In practical scenarios, however, attackers often face restrictions in accessing the training data, and when that data is inaccessible it is challenging for the model to maintain its generalization ability on clean test data while remaining vulnerable to poisoned inputs. To address these challenges, we propose TrojanTime, a novel two-stage training algorithm. In the first stage, we generate a pseudo-dataset from an arbitrary external dataset through targeted adversarial attacks; the clean model is then continually trained on this pseudo-dataset and its poisoned version. To preserve generalization, the second stage employs a carefully designed training strategy that combines logits alignment with batch-norm freezing. We evaluate TrojanTime with five types of triggers across four TSC architectures on UCR benchmark datasets from diverse domains. The results demonstrate the effectiveness of TrojanTime in executing backdoor attacks while maintaining clean accuracy. Finally, to mitigate this threat, we propose a defensive unlearning strategy that effectively reduces the ASR while preserving clean accuracy.
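As a rough illustration of the second-stage objective (a sketch, not the paper's implementation: the exact loss form, the KL-based alignment term, and the weight `lam` are assumptions), the joint loss might combine cross-entropy on poisoned pseudo-samples, which drives the backdoor, with a logits-alignment term that keeps the attacked model's clean-sample logits close to those of the frozen clean model:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, targets):
    # mean cross-entropy for integer class targets
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(targets)), targets] + 1e-12))

def logits_alignment(student_logits, teacher_logits):
    # KL(teacher || student) between softmax distributions on clean
    # pseudo-samples; the frozen clean model plays the teacher role
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def stage2_loss(poisoned_logits, target_labels,
                clean_logits, frozen_clean_logits, lam=1.0):
    # joint objective: backdoor CE on poisoned inputs plus
    # lam-weighted alignment to the frozen clean model's logits
    # (batch-norm statistics would additionally be frozen during training)
    return (cross_entropy(poisoned_logits, target_labels)
            + lam * logits_alignment(clean_logits, frozen_clean_logits))
```

In a framework such as PyTorch, the batch-norm freezing would amount to keeping those layers in evaluation mode so their running statistics are not updated by the pseudo-data.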