Learning to Factorize and Adapt: A Versatile Approach Toward Universal Spatio-Temporal Foundation Models

📅 2026-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in spatio-temporal foundation models, including poor cross-dataset generalization, the high computational cost of joint pretraining, and spatial heterogeneity. To overcome these limitations, the authors propose FactoST-v2, a factorized disentanglement architecture that decouples universal temporal learning from domain-specific spatial adaptation. This design enables full weight transferability and generalization to arbitrary sequence lengths. The temporal encoder is pretrained with random sequence masking, while a lightweight spatial adapter is constructed via meta-adaptive learning and prompt-based mechanisms. Extensive experiments show that FactoST-v2 achieves state-of-the-art accuracy across diverse domains, significantly outperforms existing foundation models in zero-shot and few-shot forecasting, and attains linear inference efficiency while matching the performance of specialized expert models.

📝 Abstract
Spatio-Temporal (ST) Foundation Models (STFMs) promise cross-dataset generalization, yet joint ST pretraining is computationally expensive and grapples with the heterogeneity of domain-specific spatial patterns. Substantially extending our preliminary conference version, we present FactoST-v2, an enhanced factorized framework redesigned for full weight transfer and arbitrary-length generalization. FactoST-v2 decouples universal temporal learning from domain-specific spatial adaptation. The first stage pretrains a minimalist encoder-only backbone using randomized sequence masking to capture invariant temporal dynamics, enabling probabilistic quantile prediction across variable horizons. The second stage employs a streamlined adapter to rapidly inject spatial awareness via meta-adaptive learning and prompting. Comprehensive evaluations across diverse domains demonstrate that FactoST-v2 achieves state-of-the-art accuracy with linear efficiency, significantly outperforming existing foundation models in zero-shot and few-shot scenarios while rivaling domain-specific expert baselines. This factorized paradigm offers a practical, scalable path toward truly universal STFMs. Code is available at https://github.com/CityMind-Lab/FactoST.
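The first pretraining stage hinges on randomized sequence masking: timesteps are hidden and the encoder is trained to reconstruct them. A minimal sketch of that corruption step (function name, zero-fill convention, and the 50% ratio here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def random_sequence_mask(series, mask_ratio=0.5, seed=0):
    """Hide a random subset of timesteps per node, as in masked
    temporal pretraining. Returns the corrupted series and the mask."""
    rng = np.random.default_rng(seed)
    num_nodes, seq_len = series.shape
    num_masked = int(seq_len * mask_ratio)
    mask = np.zeros((num_nodes, seq_len), dtype=bool)
    for n in range(num_nodes):
        idx = rng.choice(seq_len, size=num_masked, replace=False)
        mask[n, idx] = True
    # Masked positions are zero-filled; the encoder must reconstruct them.
    corrupted = np.where(mask, 0.0, series)
    return corrupted, mask

x = np.arange(24, dtype=float).reshape(2, 12)  # (nodes, timesteps)
corrupted, mask = random_sequence_mask(x)
# The reconstruction loss would be computed only on masked positions:
loss_targets = x[mask]
```

Because the masking and reconstruction operate purely along the time axis, a backbone trained this way carries no dataset-specific spatial structure, which is what lets the second-stage adapter inject spatial awareness per domain.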
Problem

Research questions and friction points this paper is trying to address.

Spatio-Temporal Foundation Models
cross-dataset generalization
domain heterogeneity
computational efficiency
universal modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

factorized framework
spatio-temporal foundation model
temporal pretraining
spatial adaptation
meta adaptive learning
Siru Zhong
PhD student, Hong Kong University of Science and Technology (Guangzhou)
Spatio-Temporal Data Mining, Foundation Models, Time Series

Junjie Qiu
The Hong Kong University of Science and Technology (Guangzhou)

Yangyu Wu
The Hong Kong University of Science and Technology (Guangzhou)

Yiqiu Liu
The Hong Kong University of Science and Technology (Guangzhou)

Yuanpeng He
Peking University

Zhongwen Rao
Noah's Ark Lab, Huawei
Time Series, Spatial-Temporal

Bin Yang
East China Normal University

Chenjuan Guo
Professor, East China Normal University
Data Analytics, Machine Learning

Hao Xu
Huawei 2012 Laboratories

Yuxuan Liang
Assistant Professor, Hong Kong University of Science and Technology (Guangzhou)
Spatio-Temporal Data Mining, Urban Computing, Urban AI, Foundation Models, Time Series