🤖 AI Summary
This work addresses the limitation of existing approaches that overlook the multi-scale structural correspondence between natural language and time series when leveraging large language models (LLMs) for time series analysis. To overcome this, the authors propose a multi-scale hypergraph modeling framework that enhances multi-scale semantic representation of time series through hyperedge mechanisms. A cross-modal alignment module (CMA) is designed to align language and time series representations at multiple granularities, while a mixture-of-prompts (MoP) mechanism is introduced to improve the LLM's capacity to comprehend complex temporal patterns. Extensive experiments across five task categories and 27 real-world datasets demonstrate that the proposed method significantly outperforms current state-of-the-art techniques.
📝 Abstract
Recently, there has been great success in leveraging pre-trained large language models (LLMs) for time series analysis. The core idea lies in effectively aligning the modalities of natural language and time series. However, the multi-scale structures of natural language and time series have not been fully considered, resulting in insufficient utilization of LLMs' capabilities. To this end, we propose MSH-LLM, a Multi-Scale Hypergraph method that aligns large language models for time series analysis. Specifically, a hyperedge mechanism is designed to enrich the multi-scale semantic information in the semantic space of time series. Then, a cross-modality alignment (CMA) module is introduced to align natural language and time series at different scales. In addition, a mixture-of-prompts (MoP) mechanism is introduced to provide contextual information and enhance the ability of LLMs to understand the multi-scale temporal patterns of time series. Experimental results on 27 real-world datasets across 5 different applications demonstrate that MSH-LLM achieves state-of-the-art results.
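The abstract does not specify how the mixture-of-prompts mechanism is implemented; a common realization of such a mechanism is soft gating over several learned prompt banks, where a pooled time-series feature selects a weighted combination of prompts to prepend to the LLM input. The sketch below illustrates that general idea only; all shapes, names (`prompt_banks`, `W_gate`, `mixture_of_prompts`), and the random parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical setup: K prompt banks, each with P prompt tokens of dim d.
rng = np.random.default_rng(0)
K, P, d = 4, 8, 16
prompt_banks = rng.normal(size=(K, P, d))  # stand-in for learned prompt embeddings
W_gate = rng.normal(size=(d, K))           # stand-in for a learned gating projection

def mixture_of_prompts(ts_feature):
    """Softly combine K prompt banks, weighted by a pooled time-series feature."""
    gate = softmax(ts_feature @ W_gate)                 # (K,) mixture weights
    mixed = np.einsum("k,kpd->pd", gate, prompt_banks)  # (P, d) combined prompt
    return mixed, gate

ts_feature = rng.normal(size=d)  # placeholder pooled embedding of one series
prompt, gate = mixture_of_prompts(ts_feature)
```

In a full system, `prompt` would be prepended to the tokenized input of the frozen LLM, and the gate lets different series (or different scales) route to different prompt specialists.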