🤖 AI Summary
Existing adaptation pipelines for time-series foundation models (TSFMs) are non-modular and hard to reproduce because they rely on ad hoc, model-specific implementations. Method: FMTK is a lightweight, open-source TSFM toolkit built around standardized backbone interfaces and pluggable abstractions for encoders, decoders, and adapters, yielding a modular fine-tuning framework. This design supports flexible cross-model and multi-task composition, improving pipeline reusability, maintainability, and development efficiency. Results: Experiments show that TSFM pipelines can be constructed in roughly seven lines of code while maintaining state-of-the-art performance on long-term forecasting, anomaly detection, and other downstream tasks. The core contribution is a unified, decoupled componentization paradigm for TSFMs, providing infrastructure to support industrial-scale adaptation of time-series foundation models.
📝 Abstract
Foundation models (FMs) have opened new avenues for machine learning applications due to their ability to adapt to new and unseen tasks with minimal or no further training. Time-series foundation models (TSFMs) -- FMs trained on time-series data -- have shown strong performance on classification, regression, and imputation tasks. Recent pipelines combine TSFMs with task-specific encoders, decoders, and adapters to improve performance; however, assembling such pipelines typically requires ad hoc, model-specific implementations that hinder modularity and reproducibility. We introduce FMTK, an open-source, lightweight, and extensible toolkit for constructing and fine-tuning TSFM pipelines via standardized backbone and component abstractions. FMTK enables flexible composition across models and tasks, achieving correctness and performance with an average of seven lines of code.

Code: https://github.com/umassos/FMTK
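To make the backbone-plus-components design concrete, here is a minimal sketch of the componentization paradigm the abstract describes: a pretrained backbone composed with pluggable encoder, adapter, and decoder stages behind one shared interface. All names below are illustrative stand-ins, not FMTK's actual API.

```python
# Hypothetical sketch of a decoupled TSFM pipeline (illustrative only;
# see the FMTK repository for the real interfaces).
from dataclasses import dataclass
from typing import Callable, List

Series = List[float]

@dataclass
class Pipeline:
    """Compose a TSFM pipeline from interchangeable stages."""
    encoder: Callable[[Series], Series]   # task-specific input mapping
    backbone: Callable[[Series], Series]  # pretrained TSFM (frozen here)
    adapter: Callable[[Series], Series]   # lightweight fine-tuned layer
    decoder: Callable[[Series], Series]   # task-specific output head

    def __call__(self, x: Series) -> Series:
        return self.decoder(self.adapter(self.backbone(self.encoder(x))))

# Toy stand-in components; a real toolkit supplies pretrained models.
normalize = lambda x: [v / (max(map(abs, x)) or 1.0) for v in x]
identity_backbone = lambda x: x            # placeholder for a frozen TSFM
scale_adapter = lambda x: [2.0 * v for v in x]
last_value_decoder = lambda x: [x[-1]]     # naive one-step forecast head

forecast = Pipeline(normalize, identity_backbone, scale_adapter, last_value_decoder)
print(forecast([1.0, 2.0, 4.0]))  # -> [2.0]
```

Because each stage conforms to the same callable interface, swapping the decoder (say, from forecasting to anomaly scoring) or the backbone leaves the rest of the pipeline untouched, which is the reuse property the paper targets.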