Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling

πŸ“… 2026-03-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the longstanding trade-off between long-sequence modeling capacity and inference efficiency in time series foundation models. We propose a Serial Scaling paradigm that jointly scales model architecture, dataset, and training pipeline, yielding Timer-S1, an 8.3-billion-parameter sparse Mixture-of-Experts model with 0.75B parameters activated per token. Timer-S1 combines sparse TimeMoE blocks with generic TimeSTP blocks trained under a Serial-Token Prediction objective, significantly enhancing long-horizon forecasting while mitigating the error accumulation of iterative rolling prediction. To support large-scale training, we curate TimeBench, a high-quality, unbiased corpus of one trillion time points, and introduce a post-training strategy that simultaneously boosts short-term accuracy and long-context performance. Evaluated on GIFT-Eval, Timer-S1 achieves state-of-the-art results among pre-trained models, attaining the best MASE and CRPS scores.

πŸ“ Abstract
We introduce Timer-S1, a strong Mixture-of-Experts (MoE) time series foundation model with 8.3B total parameters, 0.75B activated parameters per token, and a context length of 11.5K. To overcome the scalability bottleneck in existing pre-trained time series foundation models, we perform Serial Scaling in three dimensions: model architecture, dataset, and training pipeline. Timer-S1 integrates sparse TimeMoE blocks and generic TimeSTP blocks for Serial-Token Prediction (STP), a generic training objective that adheres to the serial nature of forecasting. The proposed paradigm introduces serial computations to improve long-term predictions while avoiding the costly rolling-style inference and pronounced error accumulation of standard next-token prediction. Pursuing a high-quality and unbiased training dataset, we curate TimeBench, a corpus with one trillion time points, and apply meticulous data augmentation to mitigate predictive bias. We further pioneer a post-training stage, including continued pre-training and long-context extension, to enhance short-term and long-context performance. Evaluated on the large-scale GIFT-Eval leaderboard, Timer-S1 achieves state-of-the-art forecasting performance, attaining the best MASE and CRPS scores as a pre-trained model. Timer-S1 will be released to facilitate further research.
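The abstract's headline numbers (8.3B total parameters, 0.75B activated per token) follow from sparse MoE routing: each token is sent to only a few experts, so most parameters sit idle on any given forward pass. The sketch below is a generic top-k MoE layer in NumPy, purely illustrative; the function name `topk_moe_layer` and all shapes are our assumptions, not the Timer-S1 architecture.

```python
import numpy as np

def topk_moe_layer(x, expert_weights, gate_weights, k=2):
    """Minimal sparse MoE layer (illustrative, not Timer-S1's actual block):
    route each token to its top-k experts and mix their outputs by the
    softmax-normalized gate scores of the selected experts only."""
    logits = x @ gate_weights                    # (tokens, n_experts) gate scores
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of the k best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = topk[t]
        scores = np.exp(logits[t, sel])
        scores /= scores.sum()                   # softmax over the k selected experts
        for w, e in zip(scores, sel):
            out[t] += w * (x[t] @ expert_weights[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens, k = 8, 8, 4, 2
x = rng.normal(size=(tokens, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(d, n_experts))
y = topk_moe_layer(x, experts, gate, k=k)
# Only k of n_experts expert matrices touch each token, so the activated
# expert-parameter fraction is k / n_experts (here 2/8 = 0.25); the same
# sparsity is why Timer-S1 activates 0.75B of its 8.3B parameters per token.
```

The per-token loop keeps the routing explicit; real implementations batch tokens by expert assignment instead.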
Problem

Research questions and friction points this paper is trying to address.

time series foundation model
scalability bottleneck
long-term forecasting
serial prediction
large-scale time series
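The error accumulation that motivates Serial-Token Prediction can be seen in a toy autoregressive example (our illustration, not the paper's method): when one-step forecasts are fed back as inputs, a small per-step model error compounds multiplicatively over the horizon, whereas a hypothetical direct multi-step predictor pays its error only once.

```python
import numpy as np

# Toy AR(1) illustration of rolling-forecast error accumulation.
true_phi, est_phi = 0.95, 0.90   # true vs. slightly mis-estimated coefficient
horizon = 64
x0 = 1.0
steps = np.arange(1, horizon + 1)

# Noiseless ground-truth AR(1) trajectory starting from x0.
truth = x0 * true_phi ** steps

# Rolling next-step forecasts: each prediction is fed back as the next input,
# so the coefficient error compounds as (est_phi / true_phi) ** h.
rolling = x0 * est_phi ** steps

# Hypothetical direct multi-step predictor: forecasts every horizon in one
# shot with a fixed 5% relative error that does not compound.
direct = truth * 0.95

rolling_err = np.abs(rolling - truth) / np.abs(truth)
direct_err = np.abs(direct - truth) / np.abs(truth)
print(f"relative error at h={horizon}: "
      f"rolling {rolling_err[-1]:.2f}, direct {direct_err[-1]:.2f}")
# prints: relative error at h=64: rolling 0.97, direct 0.05
```

STP, as described in the abstract, aims at this regime: extra serial computation at training time so that long horizons do not require iterative rolling at inference.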
Innovation

Methods, ideas, or system contributions that make the work stand out.

Serial Scaling
Mixture-of-Experts
Serial-Token Prediction
TimeBench
Long-context Forecasting