Shapelets-Enriched Selective Forecasting using Time Series Foundation Models

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the unreliability of time series foundation models in critical regions, which hinders their practical deployment. We propose the first selective forecasting framework that integrates shapelets to enhance prediction trustworthiness. Specifically, shapelets are extracted from the target-domain validation set via translation-invariant dictionary learning, and predictions are automatically flagged as unreliable based on their distance-based similarity to these shapelets, thereby improving model transparency and reliability without requiring additional annotations. Evaluated across multiple benchmark datasets, our method achieves substantial performance gains: it reduces average prediction error by 22.17% under zero-shot settings and by 22.62% with full fine-tuning, significantly outperforming random selection strategies.

📝 Abstract
Time series foundation models have recently attracted considerable attention for their ability to model complex time series data spanning domains such as traffic, energy, and weather. Although they exhibit strong average zero-shot performance on forecasting tasks, their predictions in certain critical regions of the data are not always reliable, limiting their usability in real-world applications, especially when the data exhibits unique trends. In this paper, we propose a selective forecasting framework that identifies these critical segments of time series using shapelets. We learn shapelets with shift-invariant dictionary learning on the validation split of the target-domain dataset. Using distance-based similarity to these shapelets, we enable the user to selectively discard unreliable predictions and stay informed of the model's realistic capabilities. Empirical results on diverse benchmark time series datasets demonstrate that our approach, applied to both zero-shot and full-shot fine-tuned models, reduces overall error by an average of 22.17% for the zero-shot model and 22.62% for the full-shot fine-tuned model. Furthermore, our approach also outperforms its random selection counterparts by up to 21.41% (zero-shot) and 21.43% (full-shot fine-tuned) on one of the datasets.
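The flagging step described in the abstract, scoring each input window by its distance to the nearest learned shapelet and abstaining when that distance is large, can be sketched as follows. This is a minimal illustration under stated assumptions: the shapelet set is assumed to have been produced by shift-invariant dictionary learning on the validation split (not shown here), and the function names, the length-normalized Euclidean distance, and the `threshold` parameter are our own choices, not the paper's implementation.

```python
import numpy as np

def min_shapelet_distance(window, shapelets):
    """Shift-invariant match: smallest length-normalized Euclidean distance
    between any shapelet and any equal-length subsequence of the window."""
    best = np.inf
    for s in shapelets:
        L = len(s)
        for start in range(len(window) - L + 1):
            d = np.linalg.norm(window[start:start + L] - s) / np.sqrt(L)
            best = min(best, d)
    return best

def selective_forecast(windows, shapelets, threshold):
    """Split forecast indices into 'kept' (the input window resembles some
    learned shapelet) and 'flagged' (far from every shapelet -> unreliable)."""
    kept, flagged = [], []
    for i, w in enumerate(windows):
        if min_shapelet_distance(w, shapelets) <= threshold:
            kept.append(i)
        else:
            flagged.append(i)
    return kept, flagged
```

In this sketch the overall error reduction comes purely from abstention: forecasts whose inputs fall outside the shapelet-covered regions are discarded rather than scored, matching the selective-forecasting setup without requiring extra annotations.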
Problem

Research questions and friction points this paper is trying to address.

time series foundation models
selective forecasting
unreliable predictions
critical regions
shapelets
Innovation

Methods, ideas, or system contributions that make the work stand out.

shapelets
selective forecasting
time series foundation models
shift-invariant dictionary learning
zero-shot forecasting
Shivani Tomar
Trinity College Dublin
Seshu Tirupathi
IBM Research, Dublin
Elizabeth Daly
IBM Research, Dublin
Ivana Dusparic
Professor in Computer Science, Trinity College Dublin
reinforcement learning, self-adaptive systems, multi-agent systems, intelligent mobility