AI Summary
To address the challenge of simultaneously achieving zero-shot accuracy, computational efficiency, and effective cross-series dependency modeling when applying large language models (LLMs) to time series forecasting, this paper proposes PatchInstruct, a fine-tuning-free, plug-and-play prompting method. Its core innovation is the first deep integration of sliding-patch tokenization with classical time series decomposition (trend, seasonal, and residual components), augmented by k-NN retrieval of similar series to enrich contextual information. PatchInstruct relies solely on prompt engineering and introduces no external modules or parameter updates. Evaluated on 32 real-world datasets, it substantially outperforms non-LLM baselines (e.g., N-BEATS, DLinear) and state-of-the-art LLM-based methods, achieving an average 18.7% reduction in MAE while keeping inference latency under 500 ms per sample. The approach thus delivers high accuracy, low computational overhead, and strong generalization across diverse time series domains.
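The three ingredients named above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual implementation): sliding-patch tokenization, a naive additive trend/seasonal/residual decomposition, and Euclidean k-NN retrieval of similar series, all rendered as plain text that could be placed in an LLM prompt. All function names and the prompt layout are assumptions for illustration only.

```python
import numpy as np

def sliding_patches(series, patch_len=4, stride=2):
    """Split a 1-D series into overlapping patches (sliding-patch tokenization)."""
    return [series[i:i + patch_len]
            for i in range(0, len(series) - patch_len + 1, stride)]

def decompose(series, period=4):
    """Naive additive decomposition: moving-average trend, period-wise mean
    seasonality, and the leftover residual (trend + seasonal + residual == series)."""
    s = np.asarray(series, dtype=float)
    trend = np.convolve(s, np.ones(period) / period, mode="same")
    detrended = s - trend
    seasonal = np.array([detrended[i % period::period].mean()
                         for i in range(len(s))])
    residual = s - trend - seasonal
    return trend, seasonal, residual

def knn_neighbors(query, candidates, k=1):
    """Indices of the k candidate series closest to the query in Euclidean distance."""
    dists = [np.linalg.norm(np.asarray(query) - np.asarray(c)) for c in candidates]
    return list(np.argsort(dists)[:k])

def build_prompt(series, candidates, horizon=3):
    """Assemble patches, decomposition components, and retrieved neighbors
    into a plain-text prompt asking the LLM for a forecast."""
    trend, seasonal, residual = decompose(series)
    patches = sliding_patches(series)
    nbrs = knn_neighbors(series, candidates)
    lines = [
        "Patches: " + "; ".join(str(list(np.round(p, 2))) for p in patches),
        "Trend: " + str(list(np.round(trend, 2))),
        "Seasonal: " + str(list(np.round(seasonal, 2))),
        "Residual: " + str(list(np.round(residual, 2))),
        "Similar series: " + "; ".join(str(list(candidates[i])) for i in nbrs),
        f"Forecast the next {horizon} values.",
    ]
    return "\n".join(lines)
```

Since the method is prompt-only, the forecast itself would come from sending the returned string to an LLM; no parameters are updated anywhere in this pipeline.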
Abstract
Recent advances in Large Language Models (LLMs) have opened new possibilities for accurate and efficient time series analysis, but prior work has often required heavy fine-tuning and/or ignored inter-series correlations. In this work, we explore simple and flexible prompt-based strategies that enable LLMs to perform time series forecasting without extensive retraining or complex external architectures. By exploring specialized prompting methods that leverage time series decomposition, patch-based tokenization, and similarity-based neighbor augmentation, we find that it is possible to improve LLM forecasting quality while maintaining simplicity and requiring minimal data preprocessing. To this end, we propose our own method, PatchInstruct, which enables LLMs to make precise and effective predictions.