TS-RAG: Retrieval-Augmented Generation based Time Series Foundation Models are Stronger Zero-Shot Forecaster

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing time-series foundation models (TSFMs) suffer from poor generalization and limited interpretability in zero-shot forecasting. To address this, we propose TS-RAG—the first retrieval-augmented generation framework tailored for zero-shot time-series forecasting. Our method leverages a pre-trained encoder to construct a zero-shot inference pipeline: (1) it introduces the RAG paradigm to time-series forecasting, enabling semantic retrieval of historically similar temporal patterns; (2) it incorporates a learnable Mixture-of-Experts (MoE) fusion module that dynamically selects relevant patterns and aligns cross-series semantics without fine-tuning. Evaluated on seven public benchmarks, TS-RAG achieves a new zero-shot state of the art, outperforming prior TSFMs by up to 6.51% in forecasting accuracy. Crucially, it simultaneously provides traceable, human-interpretable retrieval evidence—thereby bridging performance gains with model transparency.

📝 Abstract
Recently, Large Language Models (LLMs) and Foundation Models (FMs) have become prevalent for time series forecasting tasks. However, while fine-tuning LLMs for forecasting enables adaptation to specific domains, it may not generalize well across diverse, unseen datasets. Meanwhile, existing time series foundation models (TSFMs) lack inherent mechanisms for domain adaptation and suffer from limited interpretability, making them suboptimal for zero-shot forecasting. To this end, we present TS-RAG, a retrieval-augmented generation based time series forecasting framework that enhances the generalization capability and interpretability of TSFMs. Specifically, TS-RAG leverages pre-trained time series encoders to retrieve semantically relevant time series segments from a dedicated knowledge database, incorporating contextual patterns for the given time series query. Next, we develop a learnable Mixture-of-Experts (MoE)-based augmentation module, which dynamically fuses retrieved time series patterns with the TSFM's representation of the input query, improving forecasting accuracy without requiring task-specific fine-tuning. Thorough empirical studies on seven public benchmark datasets demonstrate that TS-RAG achieves state-of-the-art zero-shot forecasting performance, outperforming TSFMs by up to 6.51% across diverse domains and showcasing desired interpretability.
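The paper releases no code here, but the retrieval step it describes (embed a query window with a pre-trained encoder, then fetch semantically similar segments from a knowledge database) can be sketched roughly as follows. This is a minimal illustration only: the `embed` function is a crude stand-in for the paper's pre-trained time series encoder, and all names and the similarity choice (cosine) are assumptions, not TS-RAG's actual implementation.

```python
import numpy as np

def embed(segment: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a pre-trained time-series encoder:
    normalized first differences serve as a crude shape embedding."""
    d = np.diff(segment)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def retrieve_top_k(query: np.ndarray, knowledge_base: list, k: int = 3):
    """Rank knowledge-base segments by cosine similarity of their
    embeddings to the query embedding; return the top-k (index, score)."""
    q = embed(query)
    scores = [float(q @ embed(s)) for s in knowledge_base]
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), scores[int(i)]) for i in order]

# Toy knowledge base: three phase-shifted sinusoids plus random noise segments.
rng = np.random.default_rng(0)
kb = [np.sin(np.linspace(0, 4, 17) + p) for p in (0.0, 1.5, 3.0)]
kb += [rng.normal(size=17) for _ in range(3)]

# A query close in shape to the first sinusoid should retrieve it first.
query = np.sin(np.linspace(0, 4, 17) + 0.1)
hits = retrieve_top_k(query, kb, k=2)
print(hits)
```

In the actual framework the retrieved segments are not returned to the user directly but passed on to the MoE-based augmentation module for fusion with the TSFM's query representation.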
Problem

Research questions and friction points this paper is trying to address.

Enhance generalization in time series forecasting.
Improve interpretability of time series models.
Achieve zero-shot forecasting without fine-tuning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented generation for time series
Mixture-of-Experts module for dynamic fusion
Pre-trained encoders for semantic retrieval
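The dynamic-fusion idea above can be sketched as a gated mixture over retrieved pattern representations, loosely in the spirit of the paper's MoE augmentation module. The gating form, the additive fusion, and every name here are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_fuse(query_repr: np.ndarray, retrieved_reprs: list, gate_w: np.ndarray):
    """Hypothetical gated fusion: a (learned) gate scores each retrieved
    representation against the query; the fused representation is the query
    plus a convex combination of the retrieved patterns."""
    R = np.stack(retrieved_reprs)        # (k, d) retrieved pattern reprs
    logits = R @ gate_w @ query_repr     # one gate score per pattern
    weights = softmax(logits)            # (k,), non-negative, sums to 1
    mixture = weights @ R                # (d,), weighted pattern mixture
    return query_repr + mixture, weights

# Toy dimensions; the gate matrix stands in for learned MoE parameters.
d, k = 8, 3
rng = np.random.default_rng(1)
q = rng.normal(size=d)
retrieved = [rng.normal(size=d) for _ in range(k)]
W = np.eye(d)  # identity as a placeholder for trained gating weights

fused, w = moe_fuse(q, retrieved, W)
print(w, fused.shape)
```

In TS-RAG the gate is learnable while the backbone TSFM stays frozen, which is what allows the fusion to improve accuracy without task-specific fine-tuning.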
Kanghui Ning
School of Computing, University of Connecticut, Storrs, USA
Zijie Pan
University of Connecticut
Machine Learning, Deep Learning, Graph Neural Networks, Time Series
Yu Liu
Ant Group, Hangzhou, China
Yushan Jiang
University of Connecticut
Deep Learning, Data Mining, Time Series, Explainable AI, Multimodal Learning
James Y. Zhang
Ant Group, Hangzhou, China
Kashif Rasul
Department of Machine Learning Research, Morgan Stanley, New York, USA
Anderson Schneider
Morgan Stanley
Machine Learning
Lintao Ma
Ant Group
Bayesian Learning, Time Series Analysis, Generative Models
Yuriy Nevmyvaka
Department of Machine Learning Research, Morgan Stanley, New York, USA
Dongjin Song
Associate Professor, School of Computing, University of Connecticut
Artificial Intelligence, Machine Learning, Data Mining, Time Series, Graph Learning