A Survey of Time Series Foundation Models: Generalizing Time Series Representation with Large Language Model

📅 2024-05-03
🏛️ arXiv.org
📈 Citations: 34 · Influential: 2
🤖 AI Summary
The time series foundation model (TSM) field lacks comprehensive surveys and standardized evaluation protocols, and its technical approaches remain fragmented. Method: This paper systematically reviews over 100 state-of-the-art works published between 2022 and 2024, proposing the first 3E analytical framework for TSMs, emphasizing Effectiveness, Efficiency, and Explainability, and establishing a unified taxonomy spanning application domains, resources, and methodologies. It categorizes TSM development into two paradigms, “training from scratch” and “large language model (LLM) adaptation,” and introduces a standardized cross-paradigm benchmarking protocol. A multidimensional evaluation suite integrates structured benchmark datasets, model zoos, and toolchains. Contribution/Results: All components are open-sourced via a GitHub repository, improving the reproducibility, comparability, and practical deployment of TSM research.

📝 Abstract
Time series data are ubiquitous across various domains, making time series analysis critically important. Traditional time series models are task-specific, featuring singular functionality and limited generalization capacity. Recently, large language foundation models have unveiled their remarkable capabilities for cross-task transferability, zero-shot/few-shot learning, and decision-making explainability. This success has sparked interest in the exploration of foundation models to solve multiple time series challenges simultaneously. There are two main research lines, namely pre-training foundation models from scratch for time series and adapting large language foundation models for time series. They both contribute to the development of a unified model that is highly generalizable, versatile, and comprehensible for time series analysis. This survey offers a 3E analytical framework for comprehensive examination of related research. Specifically, we examine existing works from three dimensions, namely Effectiveness, Efficiency and Explainability. In each dimension, we focus on discussing how related works devise tailored solutions by considering unique challenges in the realm of time series. Furthermore, we provide a domain taxonomy to help followers keep up with the domain-specific advancements. In addition, we introduce extensive resources to facilitate the field's development, including datasets and open-source time series libraries. A GitHub repository is also maintained for resource updates (https://github.com/start2020/Awesome-TimeSeries-LLM-FM).
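To make the second research line concrete, below is a minimal sketch (not taken from the survey) of the LLM-adaptation paradigm: a univariate series is split into patches, each patch is linearly embedded into the token space of a frozen pre-trained GPT-2 backbone, and the final hidden state is projected to a forecast. The patch length, forecast horizon, class name, and the choice of GPT-2 are illustrative assumptions, not a method proposed by the paper.

```python
# A minimal sketch of the "LLM adaptation" paradigm. Patch length, horizon,
# and class name are illustrative assumptions, not from the surveyed methods.
import torch
import torch.nn as nn
from transformers import GPT2Model


class PatchedLLMForecaster(nn.Module):
    """Patch a univariate series, embed patches as tokens, reuse a frozen GPT-2."""

    def __init__(self, patch_len: int = 16, horizon: int = 24):
        super().__init__()
        self.patch_len = patch_len
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():          # freeze the pre-trained LLM;
            p.requires_grad = False                   # only the small adapters are trained
        d_model = self.backbone.config.n_embd         # 768 for GPT-2
        self.embed = nn.Linear(patch_len, d_model)    # patch -> pseudo-token embedding
        self.head = nn.Linear(d_model, horizon)       # last hidden state -> forecast

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len), with seq_len divisible by patch_len
        patches = x.unfold(1, self.patch_len, self.patch_len)   # (batch, n_patches, patch_len)
        tokens = self.embed(patches)                             # (batch, n_patches, d_model)
        hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
        return self.head(hidden[:, -1])                          # (batch, horizon)


# Usage: forecast 24 future steps from a 96-step context window.
model = PatchedLLMForecaster()
context = torch.randn(8, 96)          # a batch of 8 synthetic series
print(model(context).shape)           # torch.Size([8, 24])
```

By contrast, the "training from scratch" line described in the abstract would pre-train the backbone itself on large time series corpora rather than reusing language-model weights.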
Problem

Research questions and friction points this paper is trying to address.

Surveying the challenges of applying foundation models to time series analysis
Analyzing modality-specific hurdles in adapting pre-trained models
Providing a taxonomy and solutions for cross-modal time series tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Foundation models for time series analysis
Modality-aware, challenge-oriented taxonomy
Pre-training on time series, language, and vision data
👥 Authors
Jiexia Ye · Hong Kong University of Science and Technology (Guangzhou), China
Weiqi Zhang · Tsinghua University · 3D Computer Vision, Generative Model
Ke Yi · HKUST · Algorithms, Databases
Yongzi Yu · Hong Kong University of Science and Technology (Guangzhou), China
Ziyue Li · CS PhD, University of Maryland · Machine learning
Jia Li · Hong Kong University of Science and Technology, China
F. Tsung · Hong Kong University of Science and Technology, China