🤖 AI Summary
This work challenges the prevailing paradigm that time-series representation learning requires task-specific pretraining, asking whether pretrained time-series forecasting models can serve as general-purpose feature extractors for time-series classification. Method: We evaluate zero-shot transfer of frozen forecasting models to classification tasks and propose two model-agnostic embedding enhancement strategies to facilitate cross-task representation reuse. Contribution/Results: Across multiple classification benchmarks, the best-performing forecasting models achieve accuracy competitive with or superior to dedicated classification-pretrained models, and forecasting capability correlates strongly with downstream classification performance. This study establishes forecasting as an effective proxy task for learning transferable, efficient, and generalizable time-series representations, opening a path toward universal time-series foundation models.
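To make the zero-shot transfer setup concrete, here is a minimal sketch of the frozen-encoder-plus-linear-probe recipe. The summary does not name a specific backbone, pooling scheme, or probe, so the `ForecasterEncoder` stand-in (a random-weight GRU), the mean-pooling step, and the logistic-regression probe are illustrative assumptions; in practice the encoder would be a pretrained forecasting model with loaded weights.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# Stand-in for a pretrained forecasting backbone (hypothetical; the paper's
# actual models are not specified here). Only its embeddings are used.
class ForecasterEncoder(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=d_model, batch_first=True)

    @torch.no_grad()
    def embed(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, 1) -> per-step hidden states (batch, length, d_model)
        h, _ = self.rnn(x)
        # Mean-pool over time to get one fixed-size embedding per series.
        return h.mean(dim=1)

encoder = ForecasterEncoder().eval()  # frozen: no gradient updates
for p in encoder.parameters():
    p.requires_grad_(False)

# Toy labeled series: class 0 = pure noise, class 1 = noisy sine wave.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 128)
x0 = rng.normal(size=(50, 128))
x1 = np.sin(t) + 0.3 * rng.normal(size=(50, 128))
X = torch.tensor(np.concatenate([x0, x1]), dtype=torch.float32).unsqueeze(-1)
y = np.array([0] * 50 + [1] * 50)

# Zero-shot transfer: embed with the frozen encoder, fit only a linear probe.
Z = encoder.embed(X).numpy()
probe = LogisticRegression(max_iter=1000).fit(Z, y)
print("probe train accuracy:", probe.score(Z, y))
```

The key design point is that the forecasting model itself is never fine-tuned; all task adaptation happens in the lightweight probe, which is what makes the embeddings a test of representation quality.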
📝 Abstract
Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification. To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. Moreover, we observe a positive correlation between forecasting and classification performance. These findings challenge the assumption that task-specific pre-training is necessary and suggest that learning to forecast may provide a powerful route toward general-purpose time series foundation models.
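The abstract does not spell out the two model-agnostic embedding augmentations, so the sketch below shows one generic augmentation of this kind under stated assumptions: concatenating simple per-series statistics onto the frozen-model embedding before fitting the classifier. The `augment_embedding` helper and the choice of statistics are hypothetical illustrations, not the paper's recipe.

```python
import numpy as np

def augment_embedding(z: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Concatenate model embeddings with simple per-series statistics.

    z: (n, d) embeddings from a frozen forecasting model
    x: (n, length) raw series
    The specific statistics below are illustrative assumptions.
    """
    stats = np.stack([
        x.mean(axis=1),                           # level
        x.std(axis=1),                            # scale
        np.abs(np.diff(x, axis=1)).mean(axis=1),  # roughness proxy
    ], axis=1)
    return np.concatenate([z, stats], axis=1)

# Usage with the probe pipeline sketched above:
# Z_aug = augment_embedding(Z, X.squeeze(-1).numpy())
# probe = LogisticRegression(max_iter=1000).fit(Z_aug, y)
```

Because the augmentation operates purely on the embedding and the raw series, it applies unchanged to any backbone, which is the sense in which such strategies are model-agnostic.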