🤖 AI Summary
Large language models (LLMs) struggle to directly model continuous time series, and existing textualization methods lack semantic interpretability. Method: We propose a multi-level decomposition-driven time-series–language alignment framework. First, the time series is decomposed into trend, seasonal, and residual components. Then, a component-specific textual reprogramming mechanism maps each component into the token space of a pre-trained LLM, enabling semantically interpretable cross-modal representation alignment. Contribution/Results: Our framework introduces the first “decompose–align–fuse” three-stage textualization paradigm, achieving high-accuracy forecasting and clear, component-level interpretability without modifying the LLM architecture. Extensive experiments on multiple benchmark datasets demonstrate significant improvements over state-of-the-art methods, validating both effectiveness and generalizability.
📝 Abstract
The adaptation of large language models (LLMs) to time series forecasting poses unique challenges, as time series data is continuous in nature, while LLMs operate on discrete tokens. Despite the success of LLMs in natural language processing (NLP) and other structured domains, aligning time series data with language-based representations while maintaining both predictive accuracy and interpretability remains a significant hurdle. Existing methods have attempted to reprogram time series data into text-based forms, but these often fall short in delivering meaningful, interpretable results. In this paper, we propose a multi-level text alignment framework for time series forecasting with LLMs that not only improves prediction accuracy but also enhances the interpretability of time series representations. Our method decomposes time series into trend, seasonal, and residual components, which are then reprogrammed into component-specific text representations. We introduce a multi-level alignment mechanism, where component-specific embeddings are aligned with pre-trained word tokens, enabling more interpretable forecasts. Experiments on multiple datasets demonstrate that our method outperforms state-of-the-art models in accuracy while providing clear, component-level interpretability.
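The first stage of the framework, decomposing a series into trend, seasonal, and residual components, can be sketched as a simple additive decomposition. This is a minimal NumPy illustration (a stand-in for STL-style decomposition; the paper's exact decomposition procedure, function names, and period choice here are assumptions for the sketch):

```python
import numpy as np

def decompose(series, period):
    """Additive decomposition: series = trend + seasonal + residual.

    A simplified moving-average decomposition, not the paper's exact method.
    """
    n = len(series)
    # Trend: centered moving average over one seasonal period.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    # Seasonal: mean of the detrended values at each phase of the period,
    # normalized to zero mean, then tiled back to the full length.
    detrended = series - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = seasonal - seasonal.mean()
    seasonal = np.resize(seasonal, n)
    # Residual: whatever the trend and seasonal parts do not explain.
    residual = series - trend - seasonal
    return trend, seasonal, residual

# Toy example: linear trend plus a period-12 sinusoid.
t = np.arange(120, dtype=float)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)
trend, seasonal, residual = decompose(series, period=12)
```

Each component would then be reprogrammed into its own text representation and aligned with the LLM's token embeddings; the decomposition itself requires no change to the LLM architecture.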