In-context Pre-trained Time-Series Foundation Models adapt to Unseen Tasks

πŸ“… 2025-11-10
πŸ›οΈ International Conference on Information and Knowledge Management
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes a novel approach to enhance the generalization of time series foundation models on unseen tasks by integrating in-context learning (ICL) into the pretraining phase. Unlike conventional methods that rely on task-specific fine-tuning, the proposed framework reformulates training data and incorporates prompt engineering to enable the model to dynamically adapt to new tasks at inference time solely through input–output examples, without any parameter updates. This zero-shot adaptation capability significantly improves performance across multiple benchmarks, yielding an average gain of approximately 11.4% over state-of-the-art models. The results demonstrate that embedding ICL into time series pretraining substantially boosts both the versatility and practical utility of foundation models in real-world scenarios where labeled data for downstream tasks may be scarce or unavailable.

πŸ“ Abstract
Time-series foundation models (TSFMs) have demonstrated strong generalization capabilities across diverse datasets and tasks. However, existing foundation models are typically pre-trained to enhance performance on specific tasks and often struggle to generalize to unseen tasks without fine-tuning. To address this limitation, we propose augmenting TSFMs with In-Context Learning (ICL) capabilities, enabling them to perform test-time inference by dynamically adapting to input-output relationships provided within the context. Our framework, In-Context Time-series Pre-training (ICTP), restructures the original pre-training data to equip the backbone TSFM with ICL capabilities, enabling adaptation to unseen tasks. Experiments demonstrate that ICTP improves the performance of state-of-the-art TSFMs by approximately 11.4% on unseen tasks without requiring fine-tuning.
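The abstract describes restructuring pre-training data so the model sees input-output example pairs in its context and infers the task from them at inference time. A minimal sketch of that idea is shown below; the flat layout, the separator value, and the `build_icl_sequence` helper are illustrative assumptions, not the paper's actual ICTP data format.

```python
import numpy as np

def build_icl_sequence(examples, query, sep_token=np.nan):
    """Flatten (input, output) demonstration pairs plus a query input into
    one context sequence, so a time-series model can infer the task from
    the in-context examples alone, without any parameter updates.
    `examples` is a list of (x, y) 1-D arrays; `query` is a 1-D array."""
    parts = []
    for x, y in examples:
        parts.extend([np.asarray(x, float), [sep_token],
                      np.asarray(y, float), [sep_token]])
    parts.append(np.asarray(query, float))
    return np.concatenate([np.atleast_1d(p) for p in parts])

# Two demonstration pairs for a toy "scale by 2" task, then a query input.
seq = build_icl_sequence(
    examples=[([1.0, 2.0], [2.0, 4.0]), ([3.0, 1.0], [6.0, 2.0])],
    query=[5.0, 5.0],
)
```

At inference, a model pre-trained on such sequences would be asked to continue the context with the output for the query, which is how zero-shot adaptation to an unseen task could work in this setup.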
Problem

Research questions and friction points this paper is trying to address.

time-series foundation models
unseen tasks
generalization
in-context learning
pre-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-Context Learning
Time-series Foundation Models
Task Generalization
Pre-training
Zero-shot Adaptation
πŸ”Ž Similar Papers
No similar papers found.