🤖 AI Summary
To address the weak zero-shot generalization capability in long-horizon time series forecasting, this paper proposes a foundation model framework for time series prediction. Inspired by large language model paradigms, it employs self-supervised pretraining on large-scale heterogeneous time series data to learn general-purpose temporal representations. Crucially, it innovatively unifies point forecasting and probabilistic forecasting within a single modeling objective and introduces a lightweight fine-tuning strategy for downstream adaptation. This approach transcends conventional task-specific architectures, significantly enhancing zero-shot forecasting performance on unseen datasets. Experiments demonstrate that the fine-tuned model achieves an average 18.7% reduction in MAE on long-horizon forecasting tasks. Moreover, it exhibits strong cross-domain adaptability and practical utility across multi-source, multi-frequency, and multi-domain time series. The work establishes a novel paradigm for time series foundation model research.
📝 Abstract
Inspired by recent advances in large language models, foundation models have been developed for zero-shot time series forecasting, enabling prediction on datasets unseen during pretraining. These large-scale models, trained on vast collections of time series, learn generalizable representations for both point and probabilistic forecasting, reducing the need for task-specific architectures and manual tuning.
In this work, we review the main architectures, pretraining strategies, and optimization methods used in such models, and study the effect of fine-tuning after pretraining to enhance their performance on specific datasets. Our empirical results show that fine-tuning generally improves zero-shot forecasting capabilities, especially for long-term horizons.