🤖 AI Summary
Deep time-series forecasting is highly susceptible to noise and anomalies, leading to overfitting and degraded generalization. To address this, we propose a selective learning strategy that abandons uniform optimization across all time steps and instead dynamically identifies high-confidence, generalizable steps for training. Our core innovation is a dual-mask mechanism: an uncertainty mask derived from residual entropy and an anomaly mask generated via residual lower-bound estimation, which jointly filter out unreliable time steps. The method is modular and integrates seamlessly into mainstream deep models (e.g., Informer, TimesNet, iTransformer), requiring only lightweight components for end-to-end optimization. Extensive evaluation across eight real-world datasets demonstrates substantial improvements in robustness and accuracy: MSE reductions range from 6.5% to 37.4% over state-of-the-art baselines, with Informer achieving the largest gain of 37.4%.
📝 Abstract
Benefiting from its high capacity for capturing complex temporal patterns, deep learning (DL) has significantly advanced time series forecasting (TSF). However, deep models tend to suffer from severe overfitting due to the inherent vulnerability of time series to noise and anomalies. The prevailing DL paradigm uniformly optimizes all timesteps through the MSE loss, fitting uncertain and anomalous timesteps indiscriminately and ultimately overfitting them. To address this, we propose a novel selective learning strategy for deep TSF. Specifically, selective learning selects a subset of all timesteps over which to compute the MSE loss during optimization, guiding the model to focus on generalizable timesteps while disregarding non-generalizable ones. Our framework introduces a dual-mask mechanism to identify such timesteps: (1) an uncertainty mask leveraging residual entropy to filter out uncertain timesteps, and (2) an anomaly mask employing residual lower bound estimation to exclude anomalous timesteps. Extensive experiments across eight real-world datasets demonstrate that selective learning significantly improves the predictive performance of typical state-of-the-art deep models, including a 37.4% MSE reduction for Informer, 8.4% for TimesNet, and 6.5% for iTransformer.
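To make the selective-loss idea concrete, here is a minimal NumPy sketch of a dual-mask selective MSE. The abstract does not specify the exact mask constructions, so both criteria below are illustrative assumptions: the uncertainty mask keeps timesteps whose per-step contribution to the residual entropy is low, and the anomaly mask uses a simple robust bound (median + z·MAD of residuals) as a stand-in for the paper's residual lower bound estimation.

```python
import numpy as np

def selective_mse(pred, target, entropy_quantile=0.9, anomaly_z=3.0):
    """Average squared error over timesteps kept by BOTH masks.

    pred, target: arrays of shape (batch, timesteps).
    Mask criteria are hypothetical stand-ins for the paper's
    residual-entropy and lower-bound-estimation components.
    """
    residual = np.abs(pred - target)  # per-timestep residual magnitude

    # Uncertainty mask (assumption): normalize residuals into a per-sequence
    # distribution and keep steps with low entropy contribution, i.e. steps
    # that do not dominate the residual uncertainty.
    p = residual / (residual.sum(axis=-1, keepdims=True) + 1e-8)
    contrib = -p * np.log(p + 1e-8)  # entropy contribution of each step
    unc_mask = contrib <= np.quantile(contrib, entropy_quantile,
                                      axis=-1, keepdims=True)

    # Anomaly mask (assumption): exclude steps whose residual exceeds a
    # robust bound, median + z * MAD, computed per sequence.
    med = np.median(residual, axis=-1, keepdims=True)
    mad = np.median(np.abs(residual - med), axis=-1, keepdims=True) + 1e-8
    ano_mask = residual <= med + anomaly_z * mad

    keep = unc_mask & ano_mask  # dual mask: a step must pass both filters
    sq = (pred - target) ** 2
    return (sq * keep).sum() / max(keep.sum(), 1)  # MSE over kept steps only
```

In a training loop, this loss would simply replace the uniform MSE, so the masks act as a per-step sample-weighting of 0 or 1; because the mechanism only changes the loss, it plugs into any forecaster (Informer, TimesNet, iTransformer) without architectural changes.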