EIDOS: Latent-Space Predictive Learning for Time Series Foundation Models

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes EIDOS, a novel time series foundation model that shifts the pretraining objective from direct observation-space forecasting to latent-space dynamics modeling. Existing approaches predict future values directly in the observation space, making them susceptible to noise and resulting in unstructured, inconsistent latent representations. In contrast, EIDOS employs a causal Transformer to forecast the evolution of latent representations, complemented by a lightweight aggregation branch to construct stable training targets. The model is trained via multi-task joint optimization, integrating latent alignment, observation anchoring, and direct prediction supervision to learn predictable and well-structured latent dynamics. Evaluated on the GIFT-Eval benchmark, EIDOS achieves state-of-the-art performance, significantly enhancing representation consistency and model robustness.

📝 Abstract
Most time series foundation models are pretrained by directly predicting future observations, which often yields weakly structured latent representations that capture surface noise rather than coherent and predictable temporal dynamics. In this work, we introduce EIDOS, a foundation model family that shifts pretraining from future value prediction to latent-space predictive learning. We train a causal Transformer to predict the evolution of latent representations, encouraging the emergence of structured and temporally coherent latent states. To ensure stable targets for latent-space learning, we design a lightweight aggregation branch to construct target representations. EIDOS is optimized via a joint objective that integrates latent-space alignment, observational grounding to anchor representations to the input signal, and direct forecasting supervision. On the GIFT-Eval benchmark, EIDOS mitigates structural fragmentation in the representation space and achieves state-of-the-art performance. These results demonstrate that constraining models to learn predictable latent dynamics is a principled step toward more robust and reliable time series foundation models.
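The abstract describes a three-part joint objective: latent-space alignment against targets from a lightweight aggregation branch, observational grounding, and direct forecasting supervision. The sketch below illustrates how such a loss could be composed. It is a minimal numpy illustration, not the authors' implementation: a single linear map stands in for the causal Transformer, a moving-average window stands in for the aggregation branch, and all variable names and loss weights are assumptions.

```python
# Minimal sketch of the three-part joint objective described in the abstract.
# Assumptions: a linear map replaces the causal Transformer, a moving average
# replaces the aggregation branch, and the loss weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_latent = 64, 1, 8

x = rng.standard_normal((T, d_in))                       # one univariate series
W_embed = rng.standard_normal((d_in, d_latent)) * 0.1    # observation -> latent
W_dyn = rng.standard_normal((d_latent, d_latent)) * 0.1  # stand-in latent predictor
W_anchor = rng.standard_normal((d_latent, d_in)) * 0.1   # latent -> reconstruction
W_fc = rng.standard_normal((d_latent, d_in)) * 0.1       # latent -> next value

z = x @ W_embed          # latent state at each time step
z_pred = z @ W_dyn       # predicted evolution of the latent states

# "Aggregation branch": smoothed latents serve as stable targets
# (these would be treated as stop-gradient targets during training).
kernel = np.ones(3) / 3.0
z_tgt = np.stack(
    [np.convolve(z[:, j], kernel, mode="same") for j in range(d_latent)],
    axis=1,
)

mse = lambda a, b: float(np.mean((a - b) ** 2))
l_latent = mse(z_pred[:-1], z_tgt[1:])         # latent-space alignment
l_anchor = mse(z @ W_anchor, x)                # observational grounding
l_forecast = mse((z_pred @ W_fc)[:-1], x[1:])  # direct forecasting supervision

# Hypothetical weighting of the three terms.
loss = 1.0 * l_latent + 0.5 * l_anchor + 1.0 * l_forecast
print(round(loss, 4))
```

The key structural point is that the alignment term compares the predicted latent at step t with the (smoothed, detached) target latent at step t+1, so the backbone is supervised on latent dynamics rather than on raw future values alone.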
Problem

Research questions and friction points this paper is trying to address.

time series foundation models
latent representations
predictive learning
temporal dynamics
representation structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent-space predictive learning
time series foundation models
structured latent representations
causal Transformer
predictable dynamics
Authors
Xinxing Zhou (Nankai University)
Qingren Yao (Eindhoven University of Technology)
Yiji Zhao (Yunnan University)
Chenghao Liu (DataDog)
Flora Salim (Professor, CSE, UNSW; Machine Learning, Time Series, Spatiotemporal, UbiComp, Foundation Models)
Xiaojie Yuan (Nankai University)
Yanlong Wen (Nankai University)
Ming Jin (Assistant Professor, School of ICT, Griffith University; Machine Learning, Time Series, Graph Data Mining, Multimodal Learning)