🤖 AI Summary
This work addresses the challenge of learning semantically rich and disentangled representations from multi-mode tensor time series, whose structural complexity hinders effective modeling. To this end, the authors propose MoST, a novel method that first reduces structural complexity through tensor slicing and then jointly models mode-specific and mode-invariant features within a contrastive learning framework. MoST achieves, for the first time, disentangled learning of these two types of features in tensor time series, and it leverages the disentangled representations as a contrastive augmentation strategy. Extensive experiments demonstrate that MoST significantly outperforms state-of-the-art methods on multiple real-world datasets in both classification and forecasting tasks.
📝 Abstract
Multi-mode tensor time series (TTS) arise in many domains, such as search engines and environmental monitoring systems. Learning representations of a TTS benefits various applications, but it is also challenging, since the structural complexity inherent in the tensor hinders the learning of rich representations. In this paper, we propose a novel representation learning method designed specifically for TTS, namely MoST. Specifically, MoST uses a tensor slicing approach to reduce the complexity of the TTS structure and learns representations that can be disentangled into individual non-temporal modes. Each representation captures mode-specific features, which describe the relationships between variables within the same mode, and mode-invariant features, which are shared across the representations of different modes. We employ a contrastive learning framework to learn the parameters; the loss function comprises two parts that learn representations in a mode-specific way and a mode-invariant way, effectively exploiting the disentangled representations as augmentations. Extensive experiments on real-world datasets show that MoST consistently outperforms state-of-the-art methods in classification and forecasting accuracy. Code is available at https://github.com/KoheiObata/MoST.
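To make the tensor-slicing idea concrete, the sketch below shows how a TTS with one temporal mode and two non-temporal modes can be sliced along each non-temporal mode into simpler matrix time series. This is only a toy illustration of the general concept under assumed shapes and names (`mode_slices`, a `(T, N1, N2)` tensor); it is not the paper's actual implementation, whose code is linked above.

```python
import numpy as np

# Toy TTS: T timesteps and two non-temporal modes
# (e.g. locations x sensors). All shapes are illustrative.
T, N1, N2 = 8, 3, 4
rng = np.random.default_rng(0)
X = rng.standard_normal((T, N1, N2))

def mode_slices(X, mode):
    """Slice a TTS along one non-temporal mode, yielding one
    (T, remaining_vars) matrix time series per index of that mode.
    `mode` is the axis number of the chosen non-temporal mode."""
    # Move the chosen mode next to the time axis, flatten the rest.
    Xm = np.moveaxis(X, mode, 1)               # (T, N_mode, ...)
    Xm = Xm.reshape(X.shape[0], Xm.shape[1], -1)
    return [Xm[:, i, :] for i in range(Xm.shape[1])]

slices_mode1 = mode_slices(X, 1)  # N1 matrices of shape (T, N2)
slices_mode2 = mode_slices(X, 2)  # N2 matrices of shape (T, N1)
```

Each slice is an ordinary multivariate time series, so it can be fed to a standard encoder; representations from slices of the same mode share mode-specific structure, while representations across modes are pushed to share mode-invariant structure by the contrastive loss.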