Univariate to Multivariate: LLMs as Zero-Shot Predictors for Time-Series Forecasting

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited zero-shot capability of large language models (LLMs) in modeling noisy, multivariate time series, proposing a lightweight, fine-tuning-free forecasting framework that bridges this gap. Methodologically, it introduces the first integration of time-series decomposition (into trend, seasonality, and residual components) with structured textual encoding, coupled with a lightweight, zero-shot prompting strategy that transforms multivariate sequences into LLM-compatible natural-language inputs. The framework is designed for small-parameter LLMs (e.g., Llama 2, Llama 3.2, GPT-4o-mini, DeepSeek 7B), overcoming their inherent limitations in directly processing raw time-series data, and achieves state-of-the-art performance across multiple multivariate benchmarks. Ablation studies confirm the critical contribution of each component to robustness and accuracy. Overall, the approach establishes a novel paradigm for enabling compact LLMs to perform zero-shot forecasting of high-noise, multivariate data without parameter updates.
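The summary describes splitting each series into trend, seasonality, and residual components before encoding. A minimal sketch of such an additive decomposition follows; the paper's exact method is not given on this page, so a centered moving average is assumed for the trend and a per-position cycle average for the seasonal component:

```python
import numpy as np

def decompose(series: np.ndarray, period: int):
    """Classical additive decomposition: series = trend + seasonal + residual.

    Simplified illustration of the trend/seasonality/residual split; the
    moving-average trend estimator here is an assumption, not LLMPred's
    documented procedure.
    """
    n = len(series)
    # Trend: centered moving average spanning one seasonal period.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # Seasonal: average the detrended values at each phase of the cycle.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    seasonal -= seasonal.mean()  # center so the components stay identifiable
    # Residual: whatever the trend and seasonal terms do not explain.
    residual = series - trend - seasonal
    return trend, seasonal, residual

t = np.arange(48, dtype=float)
series = 0.5 * t + 3.0 * np.sin(2 * np.pi * t / 12)  # linear trend + seasonality
trend, seasonal, residual = decompose(series, period=12)
```

By construction the three components sum back to the original series, so each can be encoded and forecast separately and the predictions recombined.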


📝 Abstract
Time-series prediction or forecasting is critical across many real-world dynamic systems, and recent studies have proposed using Large Language Models (LLMs) for this task due to their strong generalization capabilities and ability to perform well without extensive pre-training. However, their effectiveness in handling complex, noisy, and multivariate time-series data remains underexplored. To address this, we propose LLMPred, which enhances LLM-based time-series prediction by converting time-series sequences into text and feeding them to LLMs for zero-shot prediction, along with two main data pre-processing techniques. First, we apply time-series sequence decomposition to facilitate accurate prediction on complex and noisy univariate sequences. Second, we extend this univariate prediction capability to multivariate data using a lightweight prompt-processing strategy. Extensive experiments with smaller LLMs such as Llama 2 7B, Llama 3.2 3B, GPT-4o-mini, and DeepSeek 7B demonstrate that LLMPred achieves competitive or superior performance compared to state-of-the-art baselines. Additionally, a thorough ablation study highlights the importance of the key components proposed in LLMPred.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM-based time-series prediction for complex noisy data
Extending univariate forecasting to multivariate using lightweight prompts
Evaluating LLMPred's performance against state-of-the-art baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts time-series sequences into text
Applies time-series sequence decomposition
Uses lightweight prompt-processing for multivariate data
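The innovations above can be sketched as follows: numeric sequences are rendered as text, and the multivariate case is handled by issuing one lightweight univariate prompt per channel. The prompt wording and the `encode_series`/`build_prompts` helpers are hypothetical stand-ins; LLMPred's actual template is not shown on this page:

```python
def encode_series(values, precision=2):
    """Render a numeric sequence as comma-separated text an LLM can read."""
    return ", ".join(f"{v:.{precision}f}" for v in values)

def build_prompts(multivariate, horizon, names=None):
    """Lightweight multivariate handling: one univariate prompt per channel.

    `multivariate` is a list of per-channel value lists. Forecasting each
    channel independently lets a univariate zero-shot predictor cover the
    multivariate case without any parameter updates.
    """
    names = names or [f"channel_{i}" for i in range(len(multivariate))]
    prompts = []
    for name, values in zip(names, multivariate):
        prompts.append(
            f"Series '{name}': {encode_series(values)}. "
            f"Continue this series for the next {horizon} values, "
            f"returning only comma-separated numbers."
        )
    return prompts

# Two channels of a toy multivariate series, forecast horizon of 2 steps.
prompts = build_prompts([[1.0, 1.5, 2.0], [10.0, 9.0, 8.0]], horizon=2)
```

Each prompt would then be sent to the LLM independently, and the returned number lists parsed back into per-channel forecasts.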
Chamara Madarasingha
Curtin University
Multimedia streaming · Machine learning · Cyber security
N. Sohrabi
School of Information Technologies, Deakin University, Australia
Zahir Tari
School of Computing Technologies, RMIT University, Australia