🤖 AI Summary
To address the heavy computational overhead of large language models (LLMs) in time series classification, this paper proposes a lightweight and efficient paradigm: time series are first converted into descriptive text and then encoded directly with a frozen pre-trained text embedding model (e.g., Sentence-BERT), eliminating the need for LLM fine-tuning. The resulting text embeddings are fed into a lightweight 1D CNN followed by an MLP classification head for final prediction. The key contribution is the first successful adaptation of frozen text encoders to the time series domain, enabling end-to-end, training-free time series encoding. On standard benchmarks, the method achieves higher accuracy than the current state of the art while using on average only 14.5% of its trainable parameters, markedly improving the efficiency-accuracy trade-off. This establishes a novel, resource-efficient paradigm for time series understanding, well suited to deployment in computationally constrained environments.
📝 Abstract
Recent advances in language modeling have shown promising results when applied to time series data. In particular, fine-tuning pre-trained large language models (LLMs) for time series classification has achieved state-of-the-art (SOTA) performance on standard benchmarks. However, these LLM-based models carry a significant drawback: their large size, with trainable parameters numbering in the millions. In this paper, we propose LETS-C, an alternative approach to leveraging the success of language modeling in the time series domain. Instead of fine-tuning LLMs, we use a text embedding model to embed time series and pair the embeddings with a simple classification head composed of convolutional neural networks (CNNs) and a multilayer perceptron (MLP). We conduct extensive experiments on a well-established time series classification benchmark and demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using on average only 14.5% of the trainable parameters of the SOTA model. Our findings suggest that pairing text embedding models with a simple yet effective classification head is a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture.
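The pipeline described above (series → text → frozen embedding → 1D CNN + MLP) can be sketched end to end as follows. This is a minimal, hedged illustration, not the paper's implementation: the `FrozenTextEmbedder` is a dependency-free stand-in for a real frozen model such as Sentence-BERT (in practice one would call `sentence_transformers`' `model.encode`), and the head's weights are random, so the forward pass only demonstrates the data flow and shapes.

```python
import numpy as np

def series_to_text(series, decimals=2):
    """Serialize a numeric time series as a space-separated text string."""
    return " ".join(f"{x:.{decimals}f}" for x in series)

class FrozenTextEmbedder:
    """Toy stand-in for a frozen pre-trained text embedder (e.g. Sentence-BERT).

    A fixed random projection of character counts keeps this sketch
    dependency-free; the weights are frozen, mirroring the training-free
    encoding step of the paper.
    """
    def __init__(self, dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((256, dim))  # frozen, never trained

    def encode(self, text):
        counts = np.zeros(256)
        for ch in text:
            counts[ord(ch) % 256] += 1.0
        v = counts @ self.proj
        return v / (np.linalg.norm(v) + 1e-8)  # unit-normalized embedding

class CNNMLPHead:
    """Lightweight 1D-CNN + MLP classification head (randomly initialized;
    training is omitted, this shows only the forward pass and shapes)."""
    def __init__(self, n_kernels=8, k=5, hidden=16, n_classes=3, seed=1):
        rng = np.random.default_rng(seed)
        self.kernels = rng.standard_normal((n_kernels, k)) * 0.1
        self.w1 = rng.standard_normal((n_kernels, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, n_classes)) * 0.1

    def forward(self, emb):
        # Valid-mode 1D convolution over the embedding vector.
        windows = np.lib.stride_tricks.sliding_window_view(emb, self.kernels.shape[1])
        conv = np.maximum(windows @ self.kernels.T, 0.0)  # ReLU, (L-k+1, n_kernels)
        pooled = conv.max(axis=0)                         # global max pool
        hidden = np.maximum(pooled @ self.w1, 0.0)        # MLP hidden layer
        return hidden @ self.w2                           # class logits

# Usage: classify a toy sine-wave series end to end.
series = np.sin(np.linspace(0.0, 6.0, 64))
emb = FrozenTextEmbedder().encode(series_to_text(series))
logits = CNNMLPHead().forward(emb)
pred = int(np.argmax(logits))
```

Because the encoder is frozen, only the small CNN + MLP head would be trained, which is the source of the parameter savings the summary reports.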