🤖 AI Summary
Time-series classification is hindered by the scarcity of large-scale labeled time-series data for foundation model training. This paper proposes TiViT, a framework that converts time-series signals into images and leverages frozen Vision Transformer (ViT) representations, bypassing reliance on extensive time-series annotations. The authors theoretically show that ViT's 2D patching can increase the number of label-relevant tokens and reduce sample complexity; empirically, they find that intermediate ViT layers with high intrinsic dimension are the most effective for classification, and that ViT and time-series foundation model (TSFM) representations are strongly complementary. TiViT combines time-series-to-image encoding, frozen OpenCLIP ViT feature extraction, intrinsic-dimension analysis, and multi-source representation fusion, and achieves state-of-the-art performance on standard time-series classification benchmarks; fusing ViT and TSFM features yields further accuracy gains.
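To make the 2D-patching argument concrete, here is a minimal NumPy sketch of the idea: a 1D series is laid out as a 2D "image", which a ViT then splits into square patches (tokens). The `series_to_image` transform below (simple segment stacking) and all shapes are illustrative assumptions for this sketch, not the paper's exact imaging method.

```python
import numpy as np

def series_to_image(x, n_rows):
    """Stack consecutive segments of a 1D series into a 2D 'image'.
    (A simplified stand-in for a time-series-to-image transform.)"""
    n_cols = len(x) // n_rows
    return x[: n_rows * n_cols].reshape(n_rows, n_cols)

def patchify_2d(img, p):
    """Split a 2D image into non-overlapping p x p patches (ViT-style tokens)."""
    h, w = img.shape
    img = img[: h - h % p, : w - w % p]  # crop to a multiple of the patch size
    return img.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)

x = np.sin(np.linspace(0, 8 * np.pi, 1024))  # toy univariate series
img = series_to_image(x, n_rows=32)          # 32 x 32 "image"
tokens = patchify_2d(img, p=8)               # 16 tokens, each of dimension 64
print(tokens.shape)                          # -> (16, 64)
```

Each token now covers several distant segments of the original series at once, which is the mechanism behind the claim that 2D patching can concentrate label-relevant information into tokens.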
📝 Abstract
Time series classification is a fundamental task in healthcare and industry, yet the development of time series foundation models (TSFMs) remains limited by the scarcity of publicly available time series datasets. In this work, we propose Time Vision Transformer (TiViT), a framework that converts time series into images to leverage the representational power of frozen Vision Transformers (ViTs) pretrained on large-scale image datasets. First, we theoretically motivate our approach by analyzing the 2D patching of ViTs for time series, showing that it can increase the number of label-relevant tokens and reduce the sample complexity. Second, we empirically demonstrate that TiViT achieves state-of-the-art performance on standard time series classification benchmarks by utilizing the hidden representations of large OpenCLIP models. We explore the structure of TiViT representations and find that intermediate layers with high intrinsic dimension are the most effective for time series classification. Finally, we assess the alignment between TiViT and TSFM representation spaces and identify a strong complementarity, with further performance gains achieved by combining their features. Our findings reveal yet another direction for reusing vision representations in a non-visual domain.
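The complementarity finding suggests a simple late-fusion baseline: extract features from a frozen ViT and a TSFM separately, then feed their concatenation to a classifier. The sketch below assumes concatenation of L2-normalized embeddings; the feature dimensions and random matrices are placeholders standing in for real encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of time-series samples

# Placeholder embeddings standing in for real encoder outputs:
vit_feats = rng.normal(size=(n, 768))   # e.g. an intermediate OpenCLIP ViT layer
tsfm_feats = rng.normal(size=(n, 512))  # e.g. a TSFM embedding

def l2norm(z):
    """Normalize each row to unit length so both spaces contribute equally."""
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Late fusion: concatenate the two normalized representation spaces,
# then train any linear probe on `fused` for classification.
fused = np.concatenate([l2norm(vit_feats), l2norm(tsfm_feats)], axis=1)
print(fused.shape)  # -> (8, 1280)
```

Because both encoders stay frozen, only the lightweight probe on top of `fused` needs labeled time-series data.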