Pre-trained Forecasting Models: Strong Zero-Shot Feature Extractors for Time Series Classification

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the prevailing paradigm that time-series representation learning requires task-specific pretraining, investigating whether pretrained forecasting models can serve as general-purpose feature extractors for time-series classification. Method: The authors evaluate zero-shot transfer of frozen forecasting models to classification tasks and propose two model-agnostic embedding enhancement strategies to facilitate cross-task representation reuse. Contribution/Results: Empirical results across multiple classification benchmarks show that the best-performing forecasting models achieve classification accuracy competitive with or superior to models pre-trained specifically for classification, and that forecasting capability exhibits a strong positive correlation with downstream classification performance. The study establishes forecasting as an effective proxy task for learning transferable, efficient, and generalizable time-series representations, paving a new pathway toward universal time-series foundation models.

📝 Abstract
Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification. To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. Moreover, we observe a positive correlation between forecasting and classification performance. These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models.
Problem

Research questions and friction points this paper is trying to address.

Evaluating forecasting models' generalizability for time series classification tasks
Comparing representation extraction strategies for frozen pre-trained models
Investigating correlation between forecasting and classification performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frozen pre-trained forecasting models reused as zero-shot feature extractors
Two model-agnostic embedding augmentations that enhance the extracted representations
Best forecasting models match or surpass dedicated classification-pretrained models in accuracy
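The core idea above (freeze a pre-trained forecasting backbone, extract embeddings, then classify in that embedding space) can be sketched minimally. The paper does not publish its code here, so this is an illustrative assumption: a fixed random projection stands in for the frozen forecasting encoder, and a nearest-centroid probe stands in for the downstream classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a FROZEN pre-trained forecasting encoder:
# a fixed random projection with a tanh nonlinearity. In practice this
# would be the frozen backbone of a forecasting foundation model.
W = rng.normal(size=(64, 32))  # maps a 64-step series to a 32-dim embedding

def embed(x):
    """Zero-shot feature extraction: no gradient updates to the encoder."""
    return np.tanh(x @ W)

# Toy two-class dataset: noisy sine waves vs. pure noise, length 64.
t = np.linspace(0, 4 * np.pi, 64)
X0 = np.sin(t) + 0.1 * rng.normal(size=(50, 64))
X1 = 0.5 * rng.normal(size=(50, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

Z = embed(X)  # frozen embeddings, reused across tasks

# Simple probe: assign each series to the nearest class centroid
# in embedding space (a cheap proxy for a trained linear head).
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
dists = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
acc = (pred == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: all task-specific learning happens in the lightweight probe, while the encoder stays untouched, which is what makes cross-task reuse of forecasting representations cheap to evaluate.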