Estimating Time Series Foundation Model Transferability via In-Context Learning

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Selecting appropriate time-series foundation models (TSFMs) under limited public data remains challenging due to the high cost of exhaustive fine-tuning. Method: This paper proposes TimeTic, the first framework to formulate transferability estimation as an in-context learning problem, predicting fine-tuned performance on unseen downstream tasks from observations on source datasets. It introduces a universal model representation based on layer-wise entropy evolution, enabling generalization across heterogeneous TSFM families, and constructs the first comprehensive benchmark for this task, comprising 10 diverse time-series datasets, 10 TSFMs, and 3 forecasting tasks. Crucially, it employs a tabular foundation model as the in-context learner, taking structured inputs composed of dataset meta-features, model attributes, and observed fine-tuning performance. Results: On unseen datasets, TimeTic achieves an average Spearman rank correlation of roughly 0.6 with ground-truth fine-tuned performance, a 30% improvement over using zero-shot performance as the transferability score, and significantly outperforms existing methods.

📝 Abstract
Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training, yet fine-tuning remains critical for boosting performance in domains with limited public data. With the growing number of TSFMs, efficiently identifying the best model for downstream fine-tuning becomes increasingly challenging. In this work, we introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context-learning problem: given observations on known (source) datasets, it predicts how a TSFM will perform after fine-tuning on a downstream (target) dataset. TimeTic flexibly organizes the observed model-data relationships as contextual information, allowing it to adapt seamlessly to various test-time scenarios. Leveraging the natural tabular structure formed by dataset meta-features, model characteristics, and fine-tuned performance, we employ tabular foundation models to serve as in-context learners. We further introduce a novel model characterization based on entropy evolution across model layers, capturing embedding-space distinctions and enabling TimeTic to generalize across arbitrary model sets. We establish a comprehensive benchmark for transferability estimation including 10 datasets, 10 foundation models, and 3 forecasting tasks. On this benchmark, TimeTic's estimation demonstrates strong alignment with actual fine-tuned performance for previously unseen datasets, achieving a mean rank correlation of approximately 0.6 and a 30% improvement compared to using zero-shot performance as the transferability score.
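The abstract's headline number is a mean Spearman rank correlation of about 0.6 between estimated transferability and actual fine-tuned performance. A minimal stdlib-only sketch of that metric, applied to purely hypothetical scores (none of the numbers below come from the paper):

```python
def ranks(values):
    """Return 1-based ranks of values (assumes no ties)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation for tie-free data: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical transferability estimates for 5 candidate TSFMs on one target
# dataset, and the (equally hypothetical) performance after fine-tuning each.
estimated = [0.82, 0.55, 0.71, 0.40, 0.63]
fine_tuned = [0.80, 0.62, 0.75, 0.45, 0.58]
print(f"Spearman rank correlation: {spearman(estimated, fine_tuned):.2f}")  # → 0.90
```

Because only the ranking of candidate models matters for selection, a rank correlation is a natural fit here: a score of 1.0 means the estimator orders all candidates exactly as fine-tuning would.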
Problem

Research questions and friction points this paper is trying to address.

Estimating transferability of time series foundation models
Selecting optimal models for downstream fine-tuning
Predicting post-fine-tuning performance via in-context learning
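The summary describes the in-context input as a natural table: each context row pairs dataset meta-features and model attributes with observed fine-tuned performance, while the query row omits the performance the learner must predict. A sketch of that organization, with hypothetical field names and values (the paper's actual meta-features and attributes are not listed here):

```python
# Context rows: (dataset meta-features, model attributes) -> observed score.
# All field names and numbers are illustrative, not from the paper.
context_rows = [
    {"n_series": 300, "seq_len": 512, "model_params_M": 120, "perf": 0.74},
    {"n_series": 300, "seq_len": 512, "model_params_M": 350, "perf": 0.81},
    {"n_series": 50,  "seq_len": 96,  "model_params_M": 120, "perf": 0.66},
]
# Query row: same features for the target dataset / candidate model, with the
# fine-tuned performance unknown.
query_row = {"n_series": 50, "seq_len": 96, "model_params_M": 350, "perf": None}

def to_xy(rows, feature_keys):
    """Split rows into a feature matrix X and a label vector y."""
    X = [[r[k] for k in feature_keys] for r in rows]
    y = [r["perf"] for r in rows]
    return X, y

features = ["n_series", "seq_len", "model_params_M"]
X_ctx, y_ctx = to_xy(context_rows, features)
```

In TimeTic, a tabular foundation model would consume (X_ctx, y_ctx) as its in-context examples and predict `perf` for the query features in a single forward pass, with no gradient updates on the target dataset.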
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses in-context learning for model selection
Employs tabular foundation models as learners
Introduces entropy evolution for model characterization
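The entropy-evolution characterization above summarizes a model by how the entropy of its embeddings changes from layer to layer. The paper's exact definition is not given in this card; as one plausible reading, a stdlib-only sketch that softmax-normalizes each layer's (hypothetical) activations and records the Shannon entropy per layer, yielding a fixed-length profile usable across heterogeneous model families:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def shannon_entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical per-layer activations for a 3-layer model: in this toy input
# the activations grow more peaked with depth, so entropy decreases.
layer_embeddings = [
    [0.2, 0.1, 0.3, 0.4],   # layer 1: nearly uniform
    [1.5, 0.1, 0.2, 0.1],   # layer 2: one value starting to dominate
    [3.0, 0.1, 0.1, 0.1],   # layer 3: strongly concentrated
]
profile = [shannon_entropy(softmax(e)) for e in layer_embeddings]
```

The resulting profile (one entropy value per layer) could then serve as the model-attribute columns in the tabular in-context input, independent of the model's architecture or parameter count.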