🤖 AI Summary
To address performance degradation in pretrained models for time-series forecasting caused by objective conflicts in multi-target learning, this paper proposes a plug-and-play non-cooperative calibration strategy (SoP: Socket+Plug). The method freezes the backbone network (Socket) and fine-tunes only lightweight, task-specific Plug modules. Crucially, each prediction horizon is assigned an independent optimizer and early-stopping mechanism, enabling effective decoupling and adaptive calibration across multiple objectives. SoP requires no backbone retraining, is model-agnostic, and incurs minimal computational overhead. Extensive experiments on multiple standard benchmarks and the ERA5 meteorological dataset demonstrate up to a 22% improvement in forecasting accuracy. Notably, even when instantiated with simple MLPs as Plug modules, SoP achieves competitive performance—validating its strong generalizability and practical utility.
📝 Abstract
Deep learning-based approaches have demonstrated significant advancements in time series forecasting. Despite these ongoing developments, the complex dynamics of time series make it challenging to establish a rule of thumb for designing a golden model architecture. In this study, we argue that refining existing advanced models through a universal calibrating strategy can deliver substantial benefits at minimal resource cost, as opposed to designing and training a new model from scratch. We first identify a multi-target learning conflict in the calibrating process, which arises when jointly optimizing variables across time steps and leads to underutilization of the model's learning capabilities. To address this issue, we propose an innovative calibrating strategy called Socket+Plug (SoP). This approach retains an exclusive optimizer and early-stopping monitor for each predicted target within each Plug, while keeping the fully trained Socket backbone frozen. The model-agnostic nature of SoP allows it to directly calibrate the performance of any trained deep forecasting model, regardless of its specific architecture. Extensive experiments on various time series benchmarks and the spatio-temporal meteorological ERA5 dataset demonstrate the effectiveness of SoP, which achieves up to a 22% improvement even when employing a simple MLP as the Plug (highlighted in Figure 1).
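The Socket+Plug idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): a frozen random network stands in for the pretrained Socket, each Plug is a linear head over the Socket's hidden representation, and every forecast horizon gets its own optimizer (plain gradient descent here) and its own early-stopping monitor. All shapes, hyperparameters, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "Socket": a stand-in for any pretrained backbone (weights never updated).
W_socket = 0.3 * rng.normal(size=(8, 16))     # hypothetical pretrained weights

def socket(x):
    """Map an input window (batch, 8) to a frozen hidden representation (batch, 16)."""
    return np.tanh(x @ W_socket)

# One lightweight "Plug" (here: a linear head) per forecast horizon.
H = 3
plugs = [rng.normal(scale=0.1, size=16) for _ in range(H)]

def train_plug(h, x_tr, y_tr, x_va, y_va, lr=0.05, patience=5, max_epochs=200):
    """Fit Plug h on its own target only, with its own optimizer state
    (plain gradient descent) and its own early-stopping monitor."""
    w = plugs[h].copy()
    best_w, best_loss, bad = w.copy(), np.inf, 0
    z_tr, z_va = socket(x_tr), socket(x_va)   # the Socket stays frozen throughout
    for _ in range(max_epochs):
        grad = z_tr.T @ (z_tr @ w - y_tr) / len(y_tr)   # gradient of the MSE loss
        w -= lr * grad
        val_loss = float(np.mean((z_va @ w - y_va) ** 2))
        if val_loss < best_loss - 1e-6:
            best_w, best_loss, bad = w.copy(), val_loss, 0
        else:
            bad += 1
            if bad >= patience:               # this horizon stops independently
                break
    plugs[h] = best_w
    return best_loss

# Synthetic data: each horizon gets its own (hypothetical) target series.
x = rng.normal(size=(256, 8))
true_heads = [rng.normal(size=16) for _ in range(H)]
targets = [socket(x) @ w for w in true_heads]
split = 192
losses = [train_plug(h, x[:split], targets[h][:split],
                     x[split:], targets[h][split:]) for h in range(H)]
```

Giving each horizon its own optimizer state and stopping point is what decouples the objectives: a horizon that has already converged (or begun overfitting) halts on its own, without forcing the other horizons to stop or continue with it.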