🤖 AI Summary
Predefined time-domain augmentations in temporal contrastive learning often misalign with semantic structures, introducing noise and degrading representation quality.
Method: This paper introduces Frequency Refined Augmentation (FreRA), a lightweight, plug-and-play frequency-domain augmentation paradigm. Leveraging the FFT, it performs semantic-aware spectral decomposition to adaptively separate critical from non-critical frequency components: identity operations preserve semantics in the critical bands, while semantically agnostic perturbations are injected into the non-critical ones. The authors theoretically prove that the generated views preserve semantic content.
Contribution/Results: The paradigm exhibits three key properties (globality, independence, and compactness) and integrates seamlessly into frameworks such as SimCLR and TS-TCC. Extensive experiments on the UCR/UEA benchmarks and five large-scale real-world datasets demonstrate consistent gains over ten state-of-the-art methods across time-series classification, anomaly detection, and cross-task transfer, significantly improving both accuracy and generalization.
📝 Abstract
Contrastive learning has emerged as an effective approach for unsupervised representation learning. However, the design of an optimal augmentation strategy, although crucial for contrastive learning, is less explored for time series classification tasks. Existing predefined time-domain augmentation methods are largely adopted from vision and are not tailored to time series data. Consequently, this cross-modality incompatibility may distort the semantically relevant information of time series by introducing mismatched patterns into the data. To address this limitation, we present a novel perspective from the frequency domain and identify three advantageous properties for downstream classification: globality, independence, and compactness. To fully exploit these properties, we propose the lightweight yet effective Frequency Refined Augmentation (FreRA), tailored for time series contrastive learning on classification tasks, which can be seamlessly integrated with contrastive learning frameworks in a plug-and-play manner. Specifically, FreRA automatically separates critical and unimportant frequency components. Accordingly, we propose semantic-aware Identity Modification and semantic-agnostic Self-adaptive Modification, which protect the semantically relevant information in the critical frequency components and infuse variance into the unimportant ones, respectively. Theoretically, we prove that FreRA generates semantic-preserving views. Empirically, we conduct extensive experiments on two benchmark archives, UCR and UEA, as well as five large-scale datasets from diverse applications. FreRA consistently outperforms ten leading baselines on time series classification, anomaly detection, and transfer learning tasks, demonstrating superior contrastive representation learning and stronger generalization in transfer scenarios across diverse datasets.
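The frequency-domain recipe described above can be sketched in a few lines of numpy. Note the hedges: FreRA learns the critical/unimportant split adaptively, whereas this sketch substitutes a simple magnitude-based proxy; the function name, the `critical_frac` and `noise_scale` parameters, and the random-rescaling perturbation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def frequency_refined_augment(x, critical_frac=0.1, noise_scale=0.5, rng=None):
    """Illustrative sketch in the spirit of FreRA (not the paper's method).

    FreRA separates critical from unimportant frequency components
    adaptively; here, the largest-magnitude bins stand in as a crude
    proxy for that learned separation.
    """
    rng = rng or np.random.default_rng()
    spec = np.fft.rfft(x)                       # time -> frequency domain
    k = max(1, int(critical_frac * len(spec)))
    mask = np.zeros(len(spec), dtype=bool)
    mask[np.argsort(np.abs(spec))[-k:]] = True  # proxy "critical" bins
    # Identity-Modification-style step: critical bins pass through unchanged.
    # Perturbation step (assumed form): randomly rescale non-critical bins.
    spec[~mask] *= 1.0 + noise_scale * rng.standard_normal((~mask).sum())
    return np.fft.irfft(spec, n=len(x))         # back to the time domain

# Plug-and-play usage: two stochastic views of one series form a positive pair
# for a contrastive framework such as SimCLR or TS-TCC.
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)
view1, view2 = frequency_refined_augment(x), frequency_refined_augment(x)
```

Because the dominant (critical) bins are untouched, both views keep the semantic content of `x` while differing in the perturbed residual spectrum, which is exactly the property a contrastive positive pair needs.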