🤖 AI Summary
Existing time-series foundation models (TSFMs) suffer from poor generalization and limited interpretability in zero-shot forecasting. To address this, we propose TS-RAG—the first retrieval-augmented generation (RAG) framework tailored for zero-shot time-series forecasting. Our method leverages pre-trained encoders to construct a zero-shot inference pipeline: (1) it retrieves semantically similar historical temporal patterns from a dedicated knowledge database; (2) it incorporates a learnable Mixture-of-Experts (MoE) fusion module that dynamically combines the retrieved patterns with the TSFM's representation of the query, without task-specific fine-tuning. Evaluated on seven public benchmarks, TS-RAG achieves new zero-shot state-of-the-art results, outperforming prior TSFMs by up to 6.51% in forecasting accuracy. Crucially, it simultaneously provides traceable, human-interpretable retrieval evidence—bridging performance gains with model transparency.
📝 Abstract
Recently, Large Language Models (LLMs) and Foundation Models (FMs) have become prevalent in time series forecasting tasks. However, fine-tuning LLMs for forecasting enables adaptation to specific domains but may not generalize well across diverse, unseen datasets. Meanwhile, existing time series foundation models (TSFMs) lack inherent mechanisms for domain adaptation and suffer from limited interpretability, making them suboptimal for zero-shot forecasting. To this end, we present TS-RAG, a retrieval-augmented generation based time series forecasting framework that enhances the generalization capability and interpretability of TSFMs. Specifically, TS-RAG leverages pre-trained time series encoders to retrieve semantically relevant time series segments from a dedicated knowledge database, incorporating contextual patterns for the given time series query. Next, we develop a learnable Mixture-of-Experts (MoE)-based augmentation module, which dynamically fuses retrieved time series patterns with the TSFM's representation of the input query, improving forecasting accuracy without requiring task-specific fine-tuning. Thorough empirical studies on seven public benchmark datasets demonstrate that TS-RAG achieves state-of-the-art zero-shot forecasting performance, outperforming TSFMs by up to 6.51% across diverse domains and showcasing desired interpretability.
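The pipeline described above — encode the query, retrieve similar segments from a knowledge base, then fuse their continuations with the base model's forecast via a gating mechanism — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `encode` stands in for a pre-trained time-series encoder with crude summary-statistic features, and `alpha` and the softmax gate are hypothetical stand-ins for TS-RAG's learned MoE fusion module.

```python
import numpy as np

def encode(segment):
    # Stand-in for a pre-trained time-series encoder: a crude embedding
    # built from summary statistics (the real framework uses a TSFM encoder).
    d = np.diff(segment)
    return np.array([segment.mean(), segment.std(), d.mean(), d.std()])

def retrieve_top_k(query_emb, kb_embs, k=3):
    # Cosine similarity between the query embedding and every stored
    # segment embedding; return the indices and scores of the top-k.
    sims = kb_embs @ query_emb / (
        np.linalg.norm(kb_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9)
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

def moe_fuse(base_forecast, retrieved_futures, sims, temperature=0.5):
    # MoE-style fusion sketch: a softmax over retrieval similarities acts
    # as the gating network, and the retrieved continuation patterns act
    # as the experts. TS-RAG learns this fusion; alpha here is a fixed,
    # hypothetical mixing weight for illustration only.
    gates = np.exp(sims / temperature)
    gates /= gates.sum()
    retrieval_term = (gates[:, None] * retrieved_futures).sum(axis=0)
    alpha = 0.5
    return alpha * base_forecast + (1 - alpha) * retrieval_term
```

The retrieved indices also serve as the interpretability hook: each forecast can be traced back to the concrete historical segments (and gate weights) that influenced it.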