Joint Embeddings Go Temporal

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised time series representation learning methods—such as masked modeling—are vulnerable to input noise and confounding variables. To address this, we propose Time Series Joint Embedding Predictive Architecture (TS-JEPA), the first framework to adapt the JEPA paradigm to time series. TS-JEPA jointly optimizes future segment prediction and contrastive representation learning in a latent space, eliminating reliance on raw-input reconstruction and thereby significantly enhancing robustness. Its unified architecture natively supports both classification and forecasting tasks. Evaluated across multiple standard benchmarks, TS-JEPA achieves state-of-the-art or competitive performance, demonstrating strong generalization and balanced multi-task capability. This work establishes a novel paradigm for developing robust, general-purpose time series foundation models.
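The summary above hinges on one idea: the prediction target and the loss live in latent space, so the model never reconstructs raw (possibly noisy) inputs. A minimal numpy sketch of that predictive step is below; the linear encoders, mean pooling, dimensions, and EMA rate are illustrative assumptions, not the paper's actual TS-JEPA architecture, and the contrastive term is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper).
T, D, H = 64, 8, 16            # time steps, input channels, latent size

# Linear "encoders" stand in for the paper's networks.
W_ctx = rng.normal(scale=0.1, size=(D, H))   # context encoder
W_tgt = W_ctx.copy()                         # target encoder (EMA copy)
W_pred = rng.normal(scale=0.1, size=(H, H))  # predictor

x = rng.normal(size=(T, D))                  # one multivariate series
split = T // 2
ctx, fut = x[:split], x[split:]              # context segment, future segment

# Encode both segments, then predict the future's latent from the context's.
z_ctx = ctx @ W_ctx
z_tgt = fut @ W_tgt                          # treated as stop-gradient in practice
z_pred = z_ctx.mean(axis=0) @ W_pred         # pooled context -> predicted latent

# The loss compares latents to latents, never predictions to raw inputs.
loss = float(np.mean((z_pred - z_tgt.mean(axis=0)) ** 2))

# The target encoder tracks the context encoder via an exponential moving average.
tau = 0.996
W_tgt = tau * W_tgt + (1 - tau) * W_ctx
```

Because `z_tgt` comes from a slowly updated target encoder and gradients never touch it, the setup avoids the trivial collapse that direct input reconstruction objectives can suffer under noise.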

📝 Abstract
Self-supervised learning has recently seen great success in unsupervised representation learning, enabling breakthroughs in natural language and image processing. However, these methods often rely on autoregressive and masked modeling, which aim to reproduce masked information from the input and can therefore be vulnerable to noise or confounding variables. To address this problem, Joint-Embedding Predictive Architectures (JEPA) have been introduced to perform self-supervised learning in the latent space. To bring these advances to the time series domain, we introduce Time Series JEPA (TS-JEPA), an architecture specifically adapted for time series representation learning. We validate TS-JEPA on both classification and forecasting, showing that it can match or surpass current state-of-the-art baselines on standard datasets. Notably, our approach demonstrates a strong performance balance across diverse tasks, indicating its potential as a robust foundation for learning general representations. This work thus lays the groundwork for developing future time series foundation models based on joint embedding.
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations of autoregressive and masked modeling in self-supervised learning
Proposes Joint-Embedding Predictive Architecture for time series representation learning
Aims to build robust foundation models for diverse time series tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

TS-JEPA adapts Joint-Embedding Predictive Architectures for time series
It performs self-supervised learning in latent space
It achieves robust performance across classification and forecasting tasks