Forecasting Time Series with LLMs via Patch-Based Prompting and Decomposition

📅 2025-06-15
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of simultaneously achieving zero-shot accuracy, computational efficiency, and effective cross-series dependency modeling for large language models (LLMs) in time series forecasting, this paper proposes PatchInstruct, a fine-tuning-free, plug-and-play prompting method. Its core innovation is the first deep integration of sliding-patch tokenization with classical time series decomposition into trend, seasonal, and residual components, augmented by k-NN-based retrieval of similar series to enrich the prompt context. PatchInstruct relies solely on prompt engineering and introduces no external modules or parameter updates. Evaluated on 32 real-world datasets, it substantially outperforms non-LLM baselines (e.g., N-BEATS, DLinear) and state-of-the-art LLM-based methods, achieving an average 18.7% reduction in MAE while maintaining sub-500 ms inference latency per sample. The approach thus combines high accuracy, low computational overhead, and strong generalization across diverse time series domains.
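As a concrete illustration of the decomposition step named above, the sketch below splits a series into trend, seasonal, and residual parts with statsmodels; the synthetic data, the weekly period, and the variable names are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of classical additive decomposition:
#   series = trend + seasonal + residual.
# The synthetic series and its period are illustrative assumptions.
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
t = np.arange(200)
# Linear trend + weekly seasonality + noise, sampled daily.
series = 0.05 * t + np.sin(2 * np.pi * t / 7) + 0.1 * rng.standard_normal(t.size)

parts = seasonal_decompose(series, model="additive", period=7)
trend, seasonal, resid = parts.trend, parts.seasonal, parts.resid
```

Each component can then be serialized separately in the prompt so the model sees the series' structure explicitly.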

📝 Abstract
Recent advances in Large Language Models (LLMs) have opened new possibilities for accurate and efficient time series analysis, but prior work has often required heavy fine-tuning and/or ignored inter-series correlations. In this work, we explore simple, flexible prompt-based strategies that enable LLMs to perform time series forecasting without extensive retraining or complex external architectures. By exploring specialized prompting methods that leverage time series decomposition, patch-based tokenization, and similarity-based neighbor augmentation, we find that LLM forecasting quality can be enhanced while maintaining simplicity and requiring minimal data preprocessing. Building on these findings, we propose our own method, PatchInstruct, which enables LLMs to make precise and effective predictions.
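To make the patch-based tokenization concrete, here is a minimal sketch that slices a series into fixed-length (optionally overlapping) patches and serializes them as prompt text; the patch length, stride, output format, and function names are hypothetical choices, not the paper's specification.

```python
# Minimal sketch of patch-based tokenization for an LLM prompt.
# Patch length, stride, and the text format are illustrative assumptions.
def to_patches(values, patch_len=8, stride=8):
    """Split a 1-D sequence into patches; stride < patch_len yields sliding overlap."""
    return [values[i:i + patch_len]
            for i in range(0, len(values) - patch_len + 1, stride)]

def patches_to_prompt(patches, horizon=8):
    """Serialize patches as numbered lines and ask the model for the next values."""
    lines = [f"Patch {i}: " + ", ".join(f"{v:.3f}" for v in patch)
             for i, patch in enumerate(patches)]
    lines.append(f"Predict the next {horizon} values in the same format.")
    return "\n".join(lines)

prompt = patches_to_prompt(to_patches([0.1 * i for i in range(40)]))
print(prompt)
```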
Problem

Research questions and friction points this paper aims to address.

Enhancing LLM time series forecasting without fine-tuning
Leveraging decomposition and patching for better predictions
Simplifying preprocessing while maintaining forecasting accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Patch-based tokenization for time series
Decomposition-enhanced prompting strategies
Similarity-based neighbor augmentation technique (see the retrieval sketch after this list)
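To illustrate the neighbor-augmentation idea, the sketch below retrieves the k most similar series from a pool using z-normalized Euclidean distance over a recent window; the window length, the distance choice, and the function names (`znorm_window`, `retrieve_neighbors`) are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of similarity-based neighbor retrieval over a pool of series.
# Window length, k, and the distance choice are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def znorm_window(series, window):
    """Z-normalize the last `window` values of a series."""
    x = np.asarray(series[-window:], dtype=float)
    return (x - x.mean()) / (x.std() + 1e-8)

def retrieve_neighbors(target, pool, window=32, k=3):
    """Return the k pool series whose recent windows best match the target's."""
    index = NearestNeighbors(n_neighbors=k)
    index.fit(np.stack([znorm_window(s, window) for s in pool]))
    _, idx = index.kneighbors(znorm_window(target, window)[None, :])
    return [pool[i] for i in idx[0]]
```

The retrieved series would then be serialized into the prompt alongside the target's patches as additional context.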
🔎 Similar Papers
No similar papers found.
👥 Authors
Mayank Bumb (University of Massachusetts Amherst)
Anshul Vemulapalli (University of Massachusetts Amherst)
Sri Harsha Vardhan Prasad Jella (University of Massachusetts Amherst)
Anish Gupta (University of Massachusetts Amherst)
An La (University of Massachusetts Amherst)
Ryan A. Rossi (Adobe Research)
Hongjie Chen (Dolby Labs)
Franck Dernoncourt (NLP/ML Researcher, MIT PhD)
Nesreen K. Ahmed (Senior Principal Scientist, Cisco AI Research, Intel Labs, Purdue University)
Yu Wang (University of Oregon)