AI Summary
Existing pre-trained language model (PLM)-based time series forecasting (TSF) methods underutilize PLMs' sequential modeling capacity, resulting in suboptimal prediction accuracy. To address this, we propose CC-Time, the first framework to deeply integrate PLMs with dedicated time-series models via cross-model and cross-modal collaborative learning. Specifically, CC-Time jointly encodes time series data and their natural-language descriptions, and introduces an adaptive cross-model fusion module that dynamically integrates complementary knowledge from both model families. It combines pre-trained language understanding, cross-modal attention, and channel-aware feature fusion to jointly capture multi-granular temporal dependencies and inter-variable relationships. Evaluated on nine real-world datasets, CC-Time consistently outperforms state-of-the-art methods under both full-data and few-shot settings, demonstrating the effectiveness and generalizability of multimodal collaborative modeling for TSF.
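The cross-modal attention mentioned above can be illustrated with a generic sketch: time-series patch embeddings act as queries over text-token embeddings, so each patch absorbs information from the textual description. This is a minimal, self-contained illustration of the standard mechanism, not CC-Time's actual architecture; all shapes and names here are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(ts_emb, text_emb):
    """Time-series embeddings (queries) attend over text embeddings
    (keys/values). ts_emb: (n_patches, d), text_emb: (n_tokens, d)."""
    d = ts_emb.shape[-1]
    scores = ts_emb @ text_emb.T / np.sqrt(d)   # (n_patches, n_tokens)
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return weights @ text_emb                   # (n_patches, d)

rng = np.random.default_rng(0)
ts = rng.standard_normal((8, 16))    # 8 time-series patches (illustrative)
txt = rng.standard_normal((5, 16))   # 5 text tokens (illustrative)
out = cross_modal_attention(ts, txt)
print(out.shape)  # (8, 16)
```

In a real PLM-based model the queries, keys, and values would pass through learned projections; they are omitted here to keep the mechanism visible.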
Abstract
With the success of pre-trained language models (PLMs) in application fields beyond natural language processing, language models have attracted growing attention in time series forecasting (TSF) and have shown great promise. However, current PLM-based TSF methods still fail to achieve prediction accuracy that matches the strong sequential modeling power of language models. To address this issue, we propose Cross-Model and Cross-Modality Learning with PLMs for time series forecasting (CC-Time). We explore the potential of PLMs for time series forecasting from two aspects: 1) what time series features can be modeled by PLMs, and 2) whether relying solely on PLMs is sufficient for building time series models. For the first aspect, CC-Time incorporates cross-modality learning to model temporal dependency and channel correlations in the language model from both time series sequences and their corresponding text descriptions. For the second aspect, CC-Time further proposes a cross-model fusion block that adaptively integrates knowledge from the PLM and the time series model to form a more comprehensive model of time series patterns. Extensive experiments on nine real-world datasets demonstrate that CC-Time achieves state-of-the-art prediction accuracy in both full-data training and few-shot learning settings.
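The adaptive cross-model fusion described above can be sketched as a learned gate that blends a PLM-branch feature with a time-series-branch feature per dimension. The gate parameterization below (a single linear layer over the concatenated features followed by a sigmoid) is a common generic design and an assumption for illustration, not CC-Time's published fusion block.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_cross_model_fusion(plm_feat, ts_feat, W_g, b_g):
    """Blend two feature streams with an input-dependent gate.
    plm_feat, ts_feat: (batch, d); W_g: (2*d, d); b_g: (d,).
    The gate g in (0, 1) decides, per dimension, how much of the
    PLM feature vs. the time-series feature to keep."""
    g = sigmoid(np.concatenate([plm_feat, ts_feat], axis=-1) @ W_g + b_g)
    return g * plm_feat + (1.0 - g) * ts_feat

rng = np.random.default_rng(1)
d = 16
plm = rng.standard_normal((4, d))        # features from the PLM branch
tsm = rng.standard_normal((4, d))        # features from the TS-model branch
W = rng.standard_normal((2 * d, d)) * 0.1  # hypothetical gate weights
b = np.zeros(d)
fused = gated_cross_model_fusion(plm, tsm, W, b)
print(fused.shape)  # (4, 16)
```

Because the gate output lies in (0, 1), each fused element is a convex combination of the two branches, so the fusion can fall back to either model when one branch is uninformative.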