ADAPTive Input Training for Many-to-One Pre-Training on Time-Series Classification

πŸ“… 2026-04-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing time series pretraining methods struggle to generalize effectively across multiple datasets due to discrepancies in input length and channel dimensions. This work proposes ADAPT, a novel pretraining paradigm that enables unified modeling across 162 time series classification datasets by adaptively aligning the physical attributes of time series data. Integrating self-supervised learning with a hybrid batch training strategy, ADAPT overcomes the generalization limitations inherent in conventional many-to-one pretraining approaches. The method achieves state-of-the-art performance on multiple benchmarks, establishing a framework for developing general-purpose foundation models for time series analysis.
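The page does not include code, and the paper's exact alignment procedure is not reproduced here. As a rough illustration of what "adaptively aligning" input length and channel dimensions could look like, the sketch below resamples each series to a common length by linear interpolation and pads or truncates channels to a common count. The function name `align_series` and the padding/truncation choices are assumptions for illustration, not ADAPT's actual method.

```python
import torch
import torch.nn.functional as F

def align_series(x: torch.Tensor, target_len: int, target_channels: int) -> torch.Tensor:
    """Resample a (channels, length) series to a fixed shape.

    Length is aligned by linear interpolation; extra channels are
    truncated and missing ones zero-padded. These are illustrative
    choices, not necessarily the paper's.
    """
    # Interpolate along time: (C, L) -> (1, C, L) -> (1, C, target_len)
    x = F.interpolate(x.unsqueeze(0), size=target_len,
                      mode="linear", align_corners=False).squeeze(0)
    c = x.shape[0]
    if c < target_channels:
        # Zero-pad the channel dimension (second-to-last dim of a 2-D tensor).
        x = F.pad(x, (0, 0, 0, target_channels - c))
    return x[:target_channels]

# Series with mismatched lengths and channel counts become batch-compatible:
a = align_series(torch.randn(1, 96), target_len=128, target_channels=3)   # short, univariate
b = align_series(torch.randn(6, 512), target_len=128, target_channels=3)  # long, 6-channel
batch = torch.stack([a, b])  # shape (2, 3, 128)
```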
πŸ“ Abstract
Recent work on time-series models has leveraged self-supervised training to learn meaningful features and patterns to improve performance on downstream tasks and generalize to unseen modalities. While these pretraining methods have shown great promise in one-to-many scenarios, where a model is pre-trained on one dataset and fine-tuned on a downstream dataset, they have struggled to generalize to new datasets when more datasets are added during pre-training. This is a fundamental challenge in building foundation models for time-series data, as it limits the ability to develop models that can learn from the wide variety of diverse datasets available. To address this challenge, we present a new pre-training paradigm for time-series data called ADAPT, which can efficiently align the physical properties of data in the time-series domain, enabling mixed-batch pre-training despite the extreme discrepancies in the input sizes and channel dimensions of pre-training data. We train on 162 time-series classification datasets and set new state-of-the-art performance on classification benchmarks. We successfully train a model within the time-series domain on a wide range of datasets simultaneously, which is a major building block for generalist foundation models in time-series domains.
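To make the mixed-batch idea concrete, here is a minimal sketch of a pre-training loop that draws each batch from several datasets at once, reusing the hypothetical `align_series` from the sketch above. The encoder and the noise-jittered self-supervised objective are toy placeholders standing in for the paper's (unspecified here) architecture and loss.

```python
import random
import torch

# Assumes `align_series` from the previous sketch; encoder/objective are toys.

def mixed_batches(datasets, batch_size, steps, target_len=128, target_channels=3):
    """Yield batches whose examples are drawn from randomly mixed datasets."""
    for _ in range(steps):
        picks = [random.choice(random.choice(datasets)) for _ in range(batch_size)]
        yield torch.stack([align_series(x, target_len, target_channels) for x in picks])

# Two toy "datasets" with very different physical shapes.
datasets = [
    [torch.randn(1, 96) for _ in range(32)],   # short, univariate
    [torch.randn(6, 512) for _ in range(32)],  # long, 6-channel
]

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 128, 64))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for batch in mixed_batches(datasets, batch_size=8, steps=10):
    # Toy self-supervised objective: embeddings should agree with a jittered view.
    z1 = encoder(batch)
    z2 = encoder(batch + 0.05 * torch.randn_like(batch))
    loss = (1.0 - torch.nn.functional.cosine_similarity(z1, z2)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is only that, once every series shares one shape, examples from any subset of datasets can share a batch, which is what lets many datasets be pre-trained on simultaneously.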
Problem

Research questions and friction points this paper is trying to address.

time-series classification
pre-training
foundation models
generalization
many-to-one
Innovation

Methods, ideas, or system contributions that make the work stand out.

ADAPT
time-series pre-training
many-to-one learning
foundation models
mixed-batch training
Paul Quinlan
Electrical and Computer Engineering, Queen’s University
Qingguo Li
Mechanical and Materials Engineering, Queen’s University
Xiaodan Zhu
ECE & Ingenuity Labs Research Institute, Queen’s University, Canada
Natural language processing Β· machine learning Β· artificial intelligence