AI Summary
This work proposes a novel approach to enhance the generalization of time-series foundation models on unseen tasks by integrating in-context learning (ICL) into the pretraining phase. Unlike conventional methods that rely on task-specific fine-tuning, the proposed framework restructures the pretraining data so that the model learns to adapt dynamically to new tasks at inference time solely through input-output examples, without any parameter updates. This zero-shot adaptation capability significantly improves performance across multiple benchmarks, yielding an average gain of approximately 11.4% over state-of-the-art models. The results demonstrate that embedding ICL into time-series pretraining substantially boosts both the versatility and practical utility of foundation models in real-world scenarios where labeled data for downstream tasks is scarce or unavailable.
Abstract
Time-series foundation models (TSFMs) have demonstrated strong generalization capabilities across diverse datasets and tasks. However, existing foundation models are typically pre-trained to enhance performance on specific tasks and often struggle to generalize to unseen tasks without fine-tuning. To address this limitation, we propose augmenting TSFMs with In-Context Learning (ICL) capabilities, enabling them to perform test-time inference by dynamically adapting to input-output relationships provided within the context. Our framework, In-Context Time-series Pre-training (ICTP), restructures the original pre-training data to equip the backbone TSFM with ICL capabilities, enabling adaptation to unseen tasks. Experiments demonstrate that ICTP improves the performance of state-of-the-art TSFMs by approximately 11.4% on unseen tasks without requiring fine-tuning.
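The abstract describes restructuring pre-training data so that each training sequence contains input-output demonstration pairs followed by a query, letting the model infer the task from context alone. The sketch below illustrates one plausible way such a sequence could be assembled; the function name, separator convention, and pair layout are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def build_icl_sequence(example_pairs, query_input, sep_token=np.nan):
    """Concatenate (input, output) demonstration pairs and a query input
    into one context sequence, with a sentinel value between examples.

    NOTE: this is a hypothetical sketch of in-context sequence
    construction, not the actual ICTP data pipeline.
    """
    parts = []
    for x, y in example_pairs:
        parts.append(np.asarray(x, dtype=float))   # demonstration input
        parts.append(np.asarray(y, dtype=float))   # demonstration output
        parts.append(np.array([sep_token]))        # separator between examples
    parts.append(np.asarray(query_input, dtype=float))  # query: output inferred in-context
    return np.concatenate(parts)

# Toy usage: two demonstration pairs establish the task (predict the next
# value); the model would infer the query's output from context alone.
pairs = [([1, 2, 3], [4]), ([2, 3, 4], [5])]
seq = build_icl_sequence(pairs, [3, 4, 5])
```

At inference time, swapping in demonstration pairs from an unseen task would let a model trained on such sequences adapt without any parameter updates, which is the zero-shot behavior the abstract claims.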