🤖 AI Summary
This work addresses the challenge of disentangling architectural innovation from data engineering effects in time series foundation models, a problem exacerbated by inconsistent training protocols across existing studies. By establishing a standardized training protocol, the authors systematically evaluate the zero-shot forecasting performance of a generic Patch Transformer and conduct comprehensive ablation studies on model scaling, data composition, and training strategies. Their findings show that a standard Transformer architecture alone scales well and achieves state-of-the-art performance, and the ablations pinpoint the key drivers of that predictive accuracy. The study further contributes open-source models and full experimental details, offering the community a strong, transparent, and reproducible baseline for future research.
📝 Abstract
The recent surge in Time Series Foundation Models has rapidly advanced the field, yet the heterogeneous training setups across studies make it difficult to attribute improvements to architectural innovations rather than data engineering. In this work, we investigate the potential of a standard patch Transformer, demonstrating that this generic architecture achieves state-of-the-art zero-shot forecasting performance using a straightforward training protocol. We conduct a comprehensive ablation study covering model scaling, data composition, and training techniques to isolate the essential ingredients of high performance. Our findings identify the key drivers of performance and confirm that the generic architecture itself scales well. By strictly controlling these variables, we provide detailed empirical results on model scaling across multiple dimensions. We release our open-source model and detailed findings to establish a transparent, reproducible baseline for future research.
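For readers unfamiliar with the term, the sketch below illustrates what a generic patch Transformer for forecasting typically looks like: the context window is split into fixed-length patches, each patch is linearly embedded as a token, and a standard Transformer encoder plus a linear head produces the forecast. All class names, shapes, and hyperparameters here are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch of a patch Transformer forecaster (assumed design, not the paper's exact model).
import torch
import torch.nn as nn


class PatchTransformer(nn.Module):
    def __init__(self, patch_len=32, d_model=256, n_heads=8, n_layers=6, horizon=96):
        super().__init__()
        self.patch_len = patch_len
        # Each non-overlapping patch of raw values becomes one token.
        self.embed = nn.Linear(patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Forecast head maps the last token's representation to the prediction horizon.
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):
        # x: (batch, context_length); context_length assumed divisible by patch_len.
        b, t = x.shape
        patches = x.reshape(b, t // self.patch_len, self.patch_len)
        tokens = self.embed(patches)         # (batch, num_patches, d_model)
        hidden = self.encoder(tokens)        # standard self-attention stack
        # Positional encoding and instance normalization are omitted for brevity.
        return self.head(hidden[:, -1])      # (batch, horizon) point forecast


# Shape check with random data (no trained weights).
model = PatchTransformer()
context = torch.randn(4, 512)                # 4 series, 512 past steps
forecast = model(context)                    # -> (4, 96)
```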