🤖 AI Summary
This work proposes EIDOS, a time series foundation model family that shifts the pretraining objective from direct observation-space forecasting to latent-space dynamics modeling. Existing approaches predict future values directly in the observation space, making them susceptible to noise and resulting in unstructured, inconsistent latent representations. In contrast, EIDOS employs a causal Transformer to forecast the evolution of latent representations, complemented by a lightweight aggregation branch that constructs stable training targets. The model is trained via multi-task joint optimization, integrating latent alignment, observation anchoring, and direct prediction supervision to learn predictable and well-structured latent dynamics. Evaluated on the GIFT-Eval benchmark, EIDOS achieves state-of-the-art performance while improving representation consistency and model robustness.
📝 Abstract
Most time series foundation models are pretrained by directly predicting future observations, which often yields weakly structured latent representations that capture surface noise rather than coherent and predictable temporal dynamics. In this work, we introduce EIDOS, a foundation model family that shifts pretraining from future value prediction to latent-space predictive learning. We train a causal Transformer to predict the evolution of latent representations, encouraging the emergence of structured and temporally coherent latent states. To ensure stable targets for latent-space learning, we design a lightweight aggregation branch to construct target representations. EIDOS is optimized via a joint objective that integrates latent-space alignment, observational grounding to anchor representations to the input signal, and direct forecasting supervision. On the GIFT-Eval benchmark, EIDOS mitigates structural fragmentation in the representation space and achieves state-of-the-art performance. These results demonstrate that constraining models to learn predictable latent dynamics is a principled step toward more robust and reliable time series foundation models.
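The joint objective described above combines three supervision signals. The following is a minimal sketch of how such a weighted multi-task loss could be composed; the function names, loss weights, and use of plain mean squared error are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of an EIDOS-style joint training objective.
# All names and weights below are assumptions for exposition only.

def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def joint_loss(pred_latent, target_latent,   # latent-space alignment term
               decoded_obs, input_obs,       # observational grounding term
               forecast, future_obs,         # direct forecasting term
               w_latent=1.0, w_anchor=0.5, w_forecast=0.5):
    """Weighted sum of the three supervision signals.

    - latent alignment: predicted latent states vs. targets from the
      aggregation branch
    - observational grounding: anchors latents to the input signal
    - direct forecasting: conventional future-value supervision
    """
    return (w_latent * mse(pred_latent, target_latent)
            + w_anchor * mse(decoded_obs, input_obs)
            + w_forecast * mse(forecast, future_obs))
```

In practice each term would operate on batched tensors produced by the causal Transformer, the aggregation branch, and a decoder head, with the weights tuned as hyperparameters; the sketch only conveys how the three losses are combined into one objective.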