🤖 AI Summary
Financial time-series forecasting is hampered by high noise, non-stationarity, and market heterogeneity, which lead to weak generalization and poor domain adaptability in existing models. This paper presents the first systematic empirical study of Time-Series Foundation Models (TSFMs) across global multi-market environments, comparing zero-shot inference, fine-tuning, and domain-specific pretraining from scratch, augmented with synthetic data generation and hyperparameter optimization. Experiments use a large-scale dataset of daily excess returns. Results show that, given sufficient training data, pretraining from scratch on financial data substantially outperforms generic pretraining in both predictive accuracy and economic value, with training data scale, domain alignment, and optimization strategy as the key performance drivers. The study provides methodological guidance and empirical evidence for deploying TSFMs in trading decision-making and risk management.
📝 Abstract
Financial time series forecasting is central to trading, portfolio optimization, and risk management, yet it remains challenging due to noisy, non-stationary, and heterogeneous data. Recent advances in time series foundation models (TSFMs), inspired by large language models, offer a new paradigm for learning generalizable temporal representations from large and diverse datasets. This paper presents the first comprehensive empirical study of TSFMs in global financial markets. Using a large-scale dataset of daily excess returns across diverse markets, we evaluate zero-shot inference, fine-tuning, and pre-training from scratch against strong benchmark models. We find that off-the-shelf pre-trained TSFMs perform poorly in zero-shot and fine-tuning settings, whereas models pre-trained from scratch on financial data achieve substantial forecasting and economic improvements, underscoring the value of domain-specific adaptation. Increasing the dataset size, incorporating synthetic data augmentation, and applying hyperparameter tuning further enhance performance.
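To make the evaluation setup concrete, the sketch below illustrates the kind of comparison the study performs, on toy data rather than the paper's dataset or models. Everything here is hypothetical: synthetic returns stand in for the multi-market data, a zero forecast stands in for an uninformative zero-shot model, and a least-squares AR(1) fit stands in for a model adapted to financial data. Forecasts are scored both by mean squared error and by a simple economic-value proxy (the mean return of a sign-following long-short rule).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily returns for one asset and a market benchmark
# (stand-ins for the paper's large-scale multi-market dataset).
T = 500
market = rng.normal(0.0003, 0.01, T)
asset = 0.8 * market + rng.normal(0.0001, 0.008, T)

# Target series: daily excess returns over the benchmark.
excess = asset - market

# "Zero-shot" stand-in: predict tomorrow's excess return as zero,
# i.e. a pretrained model that carries no usable domain signal.
zero_shot_pred = np.zeros(T - 1)

# "Domain-adapted" stand-in: an AR(1) fit on the excess returns,
# playing the role of a model trained on financial data.
x, y = excess[:-1], excess[1:]
phi = (x @ y) / (x @ x)      # least-squares AR(1) coefficient
ar1_pred = phi * x

def mse(pred, target):
    """Mean squared forecast error."""
    return float(np.mean((pred - target) ** 2))

# Forecast accuracy for both models.
mse_zero = mse(zero_shot_pred, y)
mse_ar1 = mse(ar1_pred, y)

# Economic-value proxy: go long/short on the sign of the forecast.
strategy_ret = float(np.mean(np.sign(ar1_pred) * y))

print(f"MSE zero-shot: {mse_zero:.3e}")
print(f"MSE AR(1):     {mse_ar1:.3e}")
print(f"mean daily strategy return: {strategy_ret:.3e}")
```

Because the least-squares fit includes the zero forecast as a special case (phi = 0), the AR(1) stand-in can never score a worse in-sample MSE; the paper's actual comparison is, of course, out-of-sample and uses far richer models and benchmarks.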