Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities

๐Ÿ“… 2024-02-16
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 7
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This study systematically investigates the capabilities and underlying mechanisms of large language models (LLMs) in zero-shot time series forecasting. It addresses their observed bias toward periodic/trended sequences and sharp performance degradation on irregular, non-stationary data. To tackle these limitations, the authors propose two novel methodological contributions: (1) the first empirical identification of LLMsโ€™ implicit periodicity detection capability, coupled with a cross-modal time-seriesโ€“text representation analysis framework; and (2) a knowledge-enhanced zero-shot prompting paradigm integrating natural-language rephrasing and external domain knowledge injection. Experiments demonstrate that while LLMs achieve accuracy comparable to classical statistical and deep learning baselines on strongly periodic series, their performance deteriorates significantly on non-periodic data. With knowledge augmentation, average forecasting error decreases by 23.6%. This work advances the understanding of LLMsโ€™ temporal reasoning capacity and establishes a reproducible, interpretable prompting framework for time series forecasting.

Technology Category

Application Category

๐Ÿ“ Abstract
Large language models (LLMs) have been applied in many fields and have developed rapidly in recent years. As a classic machine learning task, time series forecasting has recently been boosted by LLMs. Recent works treat large language models as emph{zero-shot} time series reasoners without further fine-tuning, which achieves remarkable performance. However, there are some unexplored research problems when applying LLMs for time series forecasting under the zero-shot setting. For instance, the LLMs' preferences for the input time series are less understood. In this paper, by comparing LLMs with traditional time series forecasting models, we observe many interesting properties of LLMs in the context of time series forecasting. First, our study shows that LLMs perform well in predicting time series with clear patterns and trends, but face challenges with datasets lacking periodicity. This observation can be explained by the ability of LLMs to recognize the underlying period within datasets, which is supported by our experiments. In addition, the input strategy is investigated, and it is found that incorporating external knowledge and adopting natural language paraphrases substantially improve the predictive performance of LLMs for time series. Overall, our study contributes insight into LLMs' advantages and limitations in time series forecasting under different conditions.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Temporal Prediction Tasks
Accuracy and Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Time Prediction
External Information Integration
๐Ÿ”Ž Similar Papers
No similar papers found.