🤖 AI Summary
To address weak cross-domain generalization and the limitations of unimodal representations in multi-source few-shot time-series classification (TSC), this paper proposes a pre-training and fine-tuning framework. Methodologically, it introduces a novel two-level prototype contrastive learning mechanism and, for the first time, incorporates time-series–image cross-modal contrastive learning—leveraging time-series-to-image transformations such as the Gramian Angular Field—to overcome representational bottlenecks inherent in unimodal data augmentation. The framework jointly optimizes multi-source time-series augmentations and a cross-modal contrastive loss. Experiments demonstrate significant improvements in few-shot and cross-domain classification accuracy across multiple downstream TSC benchmarks. Moreover, the framework enables efficient fine-tuning and robust few-shot learning, substantially enhancing the domain adaptability and generalization capability of pre-trained models.
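The Gramian Angular Field (GAF) transformation mentioned above is a standard way to render a 1-D series as a 2-D image: values are rescaled to [-1, 1], mapped to angles via arccos, and pairwise trigonometric sums (GASF) or differences (GADF) form the image. The paper's exact preprocessing is not specified here; the sketch below is a minimal generic implementation (the function name and defaults are illustrative, not from the paper):

```python
import numpy as np

def gramian_angular_field(x, method="summation"):
    """Convert a 1-D time series to a Gramian Angular Field image.

    Rescales x to [-1, 1], maps values to angles phi = arccos(x_scaled),
    then builds GASF = cos(phi_i + phi_j) or GADF = sin(phi_i - phi_j).
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    # Min-max rescale to [-1, 1] so arccos is defined everywhere.
    x_scaled = (2 * x - x_max - x_min) / (x_max - x_min)
    x_scaled = np.clip(x_scaled, -1.0, 1.0)  # guard against rounding error
    phi = np.arccos(x_scaled)
    if method == "summation":                      # GASF
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])     # GADF

series = np.sin(np.linspace(0, 2 * np.pi, 32))
img = gramian_angular_field(series)
print(img.shape)  # (32, 32)
```

The resulting image preserves temporal correlations in its off-diagonal entries, which is what lets an image encoder supply structural information the raw series encoder may miss.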
📝 Abstract
Time series classification (TSC) is an important task in time series analysis. Existing TSC methods mainly train on each domain separately, suffering accuracy degradation when training samples are insufficient in certain domains. The pre-training and fine-tuning paradigm provides a promising direction for solving this problem. However, time series from different domains are substantially divergent, which challenges both effective pre-training on multi-source data and the generalization ability of pre-trained models. To address this issue, we introduce Augmented Series and Image Contrastive Learning for Time Series Classification (AimTS), a pre-training framework that learns generalizable representations from multi-source time series data. We propose a two-level prototype-based contrastive learning method to effectively utilize various augmentations in multi-source pre-training, which learns representations for TSC that generalize across domains. In addition, considering that augmentations within the single time-series modality are insufficient to fully address classification problems with distribution shift, we introduce the image modality to supplement structural information and establish series-image contrastive learning to improve the generalization of the learned representations for TSC tasks. Extensive experiments show that after multi-source pre-training, AimTS achieves good generalization performance, enabling efficient learning and even few-shot learning on various downstream TSC datasets.
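The series-image contrastive learning described above pairs each series embedding with the embedding of its image rendering and pulls matched pairs together while pushing apart mismatched pairs within a batch. The abstract does not give the exact loss, so the following is a hedged sketch using a generic symmetric InfoNCE objective (the function name, temperature, and normalization choices are assumptions, not the paper's definitive formulation):

```python
import numpy as np

def info_nce(series_emb, image_emb, temperature=0.1):
    """Symmetric InfoNCE loss between paired series and image embeddings.

    Row i of series_emb should match row i of image_emb; every other
    pair in the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    s = series_emb / np.linalg.norm(series_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = s @ v.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(s))

    def xent(l):
        # Numerically stable cross-entropy with the diagonal as targets.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the series->image and image->series directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned embeddings the loss approaches zero; shuffling the image embeddings relative to the series embeddings drives it up, which is the signal that trains the two encoders to agree.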