AI Summary
This work addresses the limited robustness of existing deep time series models under noisy conditions and their difficulty in balancing effectiveness with efficiency. We propose DropoutTS, a model-agnostic plug-in mechanism that introduces a novel paradigm of sample-level adaptive regularization by focusing on "how much to learn" rather than "what to learn." Our method estimates the noise level of each sample via reconstruction residuals and maps it to a dynamic dropout rate leveraging spectral sparsity, thereby suppressing spurious fluctuations while preserving critical temporal details. DropoutTS requires no modification to the underlying network architecture and incurs negligible additional parameters. Extensive experiments across diverse noise settings and public benchmarks demonstrate that DropoutTS significantly enhances the robustness and performance of mainstream time series models.
Abstract
Deep time series models are vulnerable to the noisy data ubiquitous in real-world applications. Existing robustness strategies either prune data or rely on costly prior quantification, failing to balance effectiveness and efficiency. In this paper, we introduce DropoutTS, a model-agnostic plugin that shifts the paradigm from "what to learn" to "how much to learn." DropoutTS employs a Sample-Adaptive Dropout mechanism: leveraging spectral sparsity to efficiently quantify instance-level noise via reconstruction residuals, it dynamically calibrates model learning capacity by mapping noise to adaptive dropout rates, selectively suppressing spurious fluctuations while preserving fine-grained fidelity. Extensive experiments across diverse noise regimes and open benchmarks show that DropoutTS consistently improves the performance of strong backbones, delivering robust results with negligible parameter overhead and no architectural modifications. Our code is available at https://github.com/CityMind-Lab/DropoutTS.
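The core mechanism, estimating per-sample noise from the residual of a sparse spectral reconstruction and mapping it to a dropout rate, can be sketched as follows. This is a minimal illustration under assumed details (the function name, the top-k frequency truncation, and the linear noise-to-rate mapping are our assumptions, not the authors' implementation):

```python
import numpy as np

def sample_adaptive_dropout_rate(x, k=8, p_min=0.1, p_max=0.7):
    """Illustrative sketch: noise estimate via sparse spectral
    reconstruction residual, mapped linearly to a dropout rate."""
    # Sparse spectral reconstruction: keep only the k largest-magnitude
    # frequency coefficients (exploiting spectral sparsity of clean signals).
    spec = np.fft.rfft(x)
    discard = np.argsort(np.abs(spec))[:-k]  # indices of the small coefficients
    spec_sparse = spec.copy()
    spec_sparse[discard] = 0.0
    recon = np.fft.irfft(spec_sparse, n=len(x))
    # Relative reconstruction residual as a proxy for instance-level noise.
    noise = np.linalg.norm(x - recon) / (np.linalg.norm(x) + 1e-8)
    # Map the noise level to a dropout rate in [p_min, p_max]:
    # noisier samples get stronger regularization.
    return p_min + (p_max - p_min) * float(np.clip(noise, 0.0, 1.0))
```

As a sanity check, a clean sinusoid (nearly perfectly captured by a few frequencies) should receive a rate near `p_min`, while the same signal corrupted with heavy Gaussian noise should receive a noticeably higher rate.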