🤖 AI Summary
Current time-series foundation models (TSFMs) face two key bottlenecks in zero-shot forecasting: (1) fixed-granularity tokenization fails to adapt to the dynamically varying information density of a sequence; and (2) generic positional encodings inadequately capture diverse periodicities and trends. To address these, we propose Kairos, an adaptive TSFM framework featuring: (1) a dynamic patching tokenizer that adjusts patch size according to local information density; (2) an instance-adaptive positional encoding that learns sequence-specific representations of temporal structure; and (3) a multi-patch prediction strategy. Pretrained on the large-scale PreSTS corpus, Kairos achieves state-of-the-art performance on the GIFT-Eval and Time-Series-Library benchmarks, outperforming prior methods with significantly fewer parameters. It delivers substantial gains in zero-shot forecasting accuracy and enables instance-level, scale-adaptive modeling of heterogeneous time series.
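To make the dynamic patching idea concrete, below is a minimal sketch of how a tokenizer could pick variable patch sizes from a local information-density signal. Everything here is an illustrative assumption, not Kairos's actual implementation: the function name `dynamic_patch_boundaries`, the use of smoothed first differences as a density proxy, and the size bounds are all hypothetical.

```python
import numpy as np

def dynamic_patch_boundaries(series: np.ndarray,
                             min_size: int = 8,
                             max_size: int = 64,
                             window: int = 16) -> list[tuple[int, int]]:
    """Greedily segment a series into variable-size patches.

    Hypothetical heuristic: local mean absolute first difference
    serves as a proxy for information density; dense regions get
    small patches, smooth regions get large ones.
    """
    # Proxy for local information density: magnitude of change.
    diff = np.abs(np.diff(series, prepend=series[0]))
    # Smooth the proxy so patch sizes do not flicker point-to-point.
    kernel = np.ones(window) / window
    density = np.convolve(diff, kernel, mode="same")
    # Normalize to [0, 1]; guard against a constant series.
    rng = density.max() - density.min()
    density = (density - density.min()) / rng if rng > 0 else np.zeros_like(density)

    patches, start = [], 0
    while start < len(series):
        # High density -> size near min_size; low density -> near max_size.
        size = int(round(max_size - density[start] * (max_size - min_size)))
        end = min(start + size, len(series))
        patches.append((start, end))
        start = end
    return patches

if __name__ == "__main__":
    t = np.linspace(0, 10, 512)
    # Smooth trend followed by a high-frequency burst.
    x = np.sin(t) + np.where(t > 7, 0.5 * np.sin(40 * t), 0.0)
    for s, e in dynamic_patch_boundaries(x)[:8]:
        print(f"patch [{s:4d}, {e:4d})  size={e - s}")
```

On this toy input, the smooth early region yields patches near the maximum size while the bursty tail yields small ones, which is the adaptive-granularity behavior the summary describes.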
📝 Abstract
Time series foundation models (TSFMs) have emerged as a powerful paradigm for time series analysis, driven by large-scale pretraining on diverse data corpora. However, time series inherently exhibit heterogeneous information density over time, influenced by system states and signal complexity, posing significant modeling challenges, especially in zero-shot scenarios. Current TSFMs rely on non-adaptive processing pipelines that fail to capture this dynamic nature. For example, common tokenization strategies such as fixed-size patching enforce rigid observational granularity, limiting their ability to adapt to varying information densities. Similarly, conventional positional encodings impose a uniform temporal scale, making it difficult to model diverse periodicities and trends across series. To overcome these limitations, we propose Kairos, a flexible TSFM framework that integrates a dynamic patching tokenizer and an instance-adaptive positional embedding. Kairos adaptively selects tokenization granularity and tailors positional encodings to the unique characteristics of each time series instance. Trained on a large-scale Predictability-Stratified Time Series (PreSTS) corpus comprising over 300 billion time points and employing a multi-patch prediction strategy at inference, Kairos achieves superior performance with far fewer parameters on two common zero-shot benchmarks, GIFT-Eval and the Time-Series-Library benchmark, consistently outperforming established methods across diverse tasks. The project page is at https://foundation-model-research.github.io/Kairos.
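The abstract's second component, an instance-adaptive positional embedding, can likewise be sketched in a few lines. The design below is an assumption for illustration only: a shared sinusoidal table is modulated by a per-instance scale and shift predicted from summary statistics of the input tokens. The class name, the mean-pooled summary, and the two-layer adapter are all hypothetical, not the paper's formulation.

```python
import torch
import torch.nn as nn

class InstanceAdaptivePositionalEncoding(nn.Module):
    """Minimal sketch of instance-conditioned positional encodings.

    Hypothetical design: a standard sinusoidal table is scaled and
    shifted per instance, conditioned on the mean token embedding.
    """

    def __init__(self, d_model: int, max_len: int = 1024):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2)
                        * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)  # (max_len, d_model)
        # Predicts a per-instance (scale, shift) for the shared table.
        self.adapter = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, 2 * d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token embeddings.
        summary = x.mean(dim=1)                    # (batch, d_model)
        scale, shift = self.adapter(summary).chunk(2, dim=-1)
        pe = self.pe[: x.size(1)].unsqueeze(0)     # (1, seq_len, d_model)
        # Modulate the shared table per instance, then add to tokens.
        return x + (1.0 + scale.unsqueeze(1)) * pe + shift.unsqueeze(1)

# Usage: enc = InstanceAdaptivePositionalEncoding(64)
#        y = enc(torch.randn(2, 128, 64))
```

Conditioning the encoding on the instance itself, rather than sharing one fixed table across all series, is what would let positions reflect series-specific periodicities and trends as the abstract describes.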