Adapt Data to Model: Adaptive Transformation Optimization for Domain-shared Time Series Foundation Models

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing prediction accuracy and generalization when large time series models are applied to diverse, non-stationary data. To this end, the authors propose TATO, a data-centric adaptive transformation optimization framework that enables efficient cross-domain adaptation without fine-tuning the pretrained model. TATO applies three key transformations (context slicing, scale normalization, and outlier correction), combined with time series augmentation and a two-stage ranking strategy. Experimental results show that TATO reduces mean squared error (MSE) by 13.6% on average across multiple benchmark datasets, with improvements reaching up to 65.4%. Moreover, the entire optimization process typically completes within two minutes, offering both strong performance and practical deployment efficiency.

📝 Abstract
Large time series models (LTMs) have emerged as powerful tools for universal forecasting, yet they often struggle with the inherent diversity and nonstationarity of real-world time series data, leading to an unsatisfactory trade-off between forecasting accuracy and generalization. Rather than continually fine-tuning new LTM instances for each domain, we propose a data-centric framework, time-series adaptive transformation optimization (TATO), that enables a single frozen pre-trained LTM to adapt to diverse downstream domains through an optimally configured transformation pipeline. Specifically, TATO constructs three representative types of transformations, including context slicing, scale normalization, and outlier correction, to help LTMs better align with target domain characteristics. To ensure robustness, we incorporate carefully selected time series augmentations and a two-stage ranking mechanism that filters out pipelines underperforming on specific metrics. Extensive experiments on state-of-the-art LTMs and widely used datasets demonstrate that TATO consistently and significantly improves domain-adaptive forecasting performance, achieving a maximum reduction in MSE of 65.4% and an average reduction of 13.6%. Moreover, TATO is highly efficient, typically completing optimization in under 2 minutes, making it practical for real-world deployment. The source code is available at https://github.com/thulab/TATO.
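To make the abstract's pipeline concrete, here is a minimal illustrative sketch of the three transformation families (context slicing, scale normalization, outlier correction) applied in front of a frozen forecaster, with candidate configurations ranked by validation MSE. All function names, parameters, and the naive stand-in "model" are hypothetical assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def slice_context(x, length):
    """Context slicing: keep only the most recent `length` points."""
    return x[-length:]

def normalize(x):
    """Scale normalization: z-score the series; return the stats
    needed to invert the transform on the forecast."""
    mu, sigma = x.mean(), x.std() + 1e-8
    return (x - mu) / sigma, (mu, sigma)

def correct_outliers(x, k=3.0):
    """Outlier correction: clip points beyond k standard deviations."""
    mu, sigma = x.mean(), x.std() + 1e-8
    return np.clip(x, mu - k * sigma, mu + k * sigma)

def apply_pipeline(x, cfg):
    """One candidate transformation pipeline (a single configuration)."""
    x = slice_context(x, cfg["context"])
    x = correct_outliers(x, cfg["clip_k"])
    return normalize(x)

def naive_forecast(x, horizon):
    """Stand-in for a frozen pretrained LTM: repeat the last value."""
    return np.full(horizon, x[-1])

def evaluate(series, cfg, horizon=8):
    """Validation MSE of a pipeline config on a held-out tail."""
    train, val = series[:-horizon], series[-horizon:]
    x, (mu, sigma) = apply_pipeline(train, cfg)
    pred = naive_forecast(x, horizon) * sigma + mu  # invert normalization
    return float(np.mean((pred - val) ** 2))

# Toy search: rank candidate configurations and keep the best one.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
candidates = [{"context": c, "clip_k": k}
              for c in (32, 64, 128) for k in (2.0, 3.0)]
best = min(candidates, key=lambda cfg: evaluate(series, cfg))
print(best)
```

Note that only the data-side pipeline is searched; the forecaster itself is never updated, which is what keeps the optimization cheap enough to finish in minutes.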
Problem

Research questions and friction points this paper is trying to address.

time series forecasting
domain adaptation
nonstationarity
generalization
large time series models
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive transformation
time series foundation model
data-centric adaptation
domain adaptation
forecasting optimization