How Foundational are Foundation Models for Time Series Forecasting?

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Foundation models (FMs) are increasingly applied to time series forecasting, yet their generalizability across diverse, domain-heterogeneous time series remains poorly understood. Method: This work systematically evaluates FM applicability through cross-domain zero-shot forecasting and fine-tuning experiments, explicitly assessing how pretraining data domain affects out-of-distribution generalization. Contribution/Results: (1) Zero-shot performance is highly sensitive to pretraining domain and degrades sharply on unseen real-world scenarios; (2) After fine-tuning, large FMs fail to consistently outperform lightweight, task-specific models and incur substantially higher computational costs; (3) The widely held “larger models are universally better” assumption is empirically invalidated for time series forecasting, leading to the principle that “domain alignment takes precedence over scale expansion.” These findings provide empirical grounding and methodological guidance for designing practical time-series foundation models—highlighting domain specificity, efficiency trade-offs, and the limits of transferability in sequential forecasting.

📝 Abstract
Foundation Models are designed to serve as versatile embedding machines, with strong zero-shot capabilities and superior generalization performance when fine-tuned on diverse downstream tasks. While this is largely true for language and vision foundation models, we argue that the inherent diversity of time series data makes them less suited for building effective foundation models. We demonstrate this using forecasting as our downstream task. We show that the zero-shot capabilities of a time series foundation model are significantly influenced by, and tied to, the specific domains it has been pretrained on. Furthermore, when applied to unseen real-world time series data, fine-tuned foundation models do not consistently yield substantially better results, relative to their increased parameter count and memory footprint, than smaller, dedicated models tailored to the specific forecasting task at hand.
Problem

Research questions and friction points this paper is trying to address.

Assessing foundation models' suitability for time series forecasting tasks
Evaluating zero-shot capability dependence on pretraining data domains
Comparing fine-tuned foundation models against specialized forecasting approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Foundation models show limited zero-shot transfer for time series
Fine-tuned foundation models underperform specialized smaller models
Time series diversity challenges foundation model generalization
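The comparison protocol behind these findings (a zero-shot forecaster applied out of domain vs. a small model fitted to the target series, scored on a held-out horizon) can be sketched in miniature. The forecasters, the synthetic series, and all names below are illustrative stand-ins, not the models or data evaluated in the paper:

```python
# Hypothetical sketch of the evaluation protocol: compare a "zero-shot"
# forecaster (no access to the target series) against a small dedicated
# model fitted on that series. Everything here is illustrative.

def mae(pred, actual):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def zero_shot_forecast(history, horizon):
    # Stand-in for a pretrained model used out of domain: it ignores
    # the target's structure and just repeats the last observed value.
    return [history[-1]] * horizon

def fit_linear_trend(history):
    # Small task-specific model: least-squares line through the history.
    n = len(history)
    mx = (n - 1) / 2
    my = sum(history) / n
    cov = sum((x - mx) * (y - my) for x, y in enumerate(history))
    var = sum((x - mx) ** 2 for x in range(n))
    slope = cov / var
    intercept = my - slope * mx
    return lambda horizon: [intercept + slope * (n + h) for h in range(horizon)]

# Synthetic trending series: the dedicated model should win here.
series = [0.5 * t for t in range(20)]
history, actual = series[:15], series[15:]

err_zero_shot = mae(zero_shot_forecast(history, 5), actual)
err_dedicated = mae(fit_linear_trend(history)(5), actual)
assert err_dedicated < err_zero_shot  # domain-aligned small model wins
```

On a series whose structure the pretrained model never saw, even a trivially fitted local model can dominate, which is the shape of the paper's "domain alignment takes precedence over scale expansion" conclusion.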
Nouha Karaouli
Univ. Rennes, CNRS, Inria, IRISA - UMR 6074, F-35000 Rennes, France
Denis Coquenet
Associate Professor, Rennes University
Deep Learning, Computer Vision
Elisa Fromont
Professor, Université de Rennes, France
Data Mining, Machine Learning, Computer Vision, Time Series Analysis
Martial Mermillod
Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
Marina Reyboz
Univ. Grenoble Alpes, CEA, LIST, 38000 Grenoble, France