🤖 AI Summary
This work presents the first systematic investigation of catastrophic forgetting in time-series foundation models (TSFMs) under continual learning settings, where sequential fine-tuning across multiple tasks leads to significant performance degradation on previously learned tasks, revealing a critical robustness gap.
Method: We propose a framework for quantifying knowledge retention built on controllable synthetic datasets with periodic structure. By combining zero-shot transfer with sequential fine-tuning, the framework separately measures adaptation to new tasks and retention of previously learned ones, making the stability-plasticity trade-off directly observable.
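To make the data-generation side of this setup concrete, here is a minimal sketch of how controllable periodic synthetic series could be produced. The function name, parameters (`periods`, `noise_std`, `amplitude_range`), and the additive sine-plus-noise construction are illustrative assumptions, not the authors' exact generator.

```python
import numpy as np

def make_periodic_dataset(n_series: int = 100,
                          length: int = 512,
                          periods=(24, 168),
                          amplitude_range=(0.5, 2.0),
                          noise_std: float = 0.1,
                          seed: int = 0) -> np.ndarray:
    """Return an array of shape (n_series, length) of noisy periodic series."""
    rng = np.random.default_rng(seed)
    t = np.arange(length)
    series = []
    for _ in range(n_series):
        x = np.zeros(length)
        for p in periods:
            amp = rng.uniform(*amplitude_range)
            phase = rng.uniform(0.0, 2.0 * np.pi)
            # One sinusoidal component with a random amplitude and phase.
            x += amp * np.sin(2.0 * np.pi * t / p + phase)
        # Noise level controls how strongly the periodic structure dominates.
        x += rng.normal(0.0, noise_std, size=length)
        series.append(x)
    return np.stack(series)

# Varying `periods` and `noise_std` yields tasks with different degrees of
# periodic structure, e.g. a clean daily/weekly task vs. a noisier yearly one.
task_a = make_periodic_dataset(periods=(24,), noise_std=0.05, seed=1)
task_b = make_periodic_dataset(periods=(7, 365), noise_std=0.3, seed=2)
```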
Contribution/Results: Empirical evaluation shows that while existing TSFMs improve on newly fine-tuned tasks, they suffer severe forgetting of earlier ones across diverse benchmarks, exposing a fundamental limitation of their continual learning capability. Our work establishes a reproducible evaluation benchmark and provides diagnostic tools to guide the development of more robust TSFMs.
📝 Abstract
Time Series Foundation Models (TSFMs) have shown promising zero-shot generalization across diverse forecasting tasks. However, their robustness to continual adaptation remains underexplored. In this work, we investigate the extent to which TSFMs suffer from catastrophic forgetting when fine-tuned sequentially on multiple datasets. Using synthetic datasets designed with varying degrees of periodic structure, we measure the trade-off between adaptation to new data and retention of prior knowledge. Our experiments reveal that, while fine-tuning improves performance on new tasks, it often causes significant degradation on previously learned ones, illustrating a fundamental stability-plasticity dilemma.
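The evaluation protocol described above (sequential fine-tuning followed by re-evaluation on all earlier tasks) can be sketched as follows. `fine_tune` and `evaluate` are hypothetical placeholders for whatever TSFM training and scoring routines are used; only the bookkeeping pattern, evaluating every previously seen task after each adaptation step, reflects the procedure the abstract describes.

```python
from typing import Callable, Dict, List, Sequence

def sequential_forgetting_eval(model,
                               tasks: Sequence,
                               fine_tune: Callable,
                               evaluate: Callable) -> List[Dict[int, float]]:
    """Fine-tune `model` on each task in order; after every step, record the
    error on all tasks seen so far. Rising error on earlier tasks indicates
    catastrophic forgetting."""
    history: List[Dict[int, float]] = []
    for step, task in enumerate(tasks):
        model = fine_tune(model, task)  # adapt to the new task (plasticity)
        # Re-evaluate every task learned so far (stability).
        scores = {i: evaluate(model, tasks[i]) for i in range(step + 1)}
        history.append(scores)
    return history

def forgetting(history: List[Dict[int, float]]) -> Dict[int, float]:
    """Forgetting on task i: final error minus the error measured right after
    task i was learned (assumes lower scores are better, e.g. MSE)."""
    final = history[-1]
    return {i: final[i] - history[i][i] for i in range(len(history) - 1)}
```

Under this bookkeeping, a near-zero forgetting value means knowledge of an earlier task was retained, while a large positive value indicates the degradation the abstract refers to.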