🤖 AI Summary
Structured time-series modeling suffers from poor generalization because observations entangle the underlying physical processes with systematic, instrument-specific biases, especially in heterogeneous or multi-instrument settings. To address this, we propose a causally motivated dual-encoder foundation model, the first time-series foundation model to explicitly disentangle physical signals from instrument effects. Our method constructs structured contrastive learning objectives from observational triplets and uses latent-variable modeling to learn separate representations of the two factors. Trained and validated on simulated astronomical time series (e.g., TESS-like variable stars), the model significantly outperforms single-latent-space baselines. On downstream prediction tasks, it improves few-shot generalization and enables rapid cross-instrument adaptation. This work establishes a new paradigm for interpretable, robust time-series foundation models grounded in causal disentanglement.
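To make the dual-encoder idea concrete, here is a minimal sketch of what such an architecture could look like. It assumes PyTorch; the 1-D convolutional encoder design, layer widths, and latent dimension are illustrative choices, not details taken from the paper.

```python
# Hypothetical sketch of a dual-encoder for univariate time series.
# Architecture details (conv encoder, widths, latent_dim) are assumptions.
import torch
import torch.nn as nn

class Encoder1D(nn.Module):
    """Maps a univariate time series of shape (B, 1, T) to a latent (B, d)."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.GELU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling over time
        )
        self.proj = nn.Linear(64, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.net(x).squeeze(-1)  # (B, 64)
        return self.proj(h)          # (B, latent_dim)

class DualEncoder(nn.Module):
    """Two parallel encoders over the same input: one latent for the
    physical signal, one for instrument-specific systematics."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.phys_encoder = Encoder1D(latent_dim)  # z_phys: underlying process
        self.inst_encoder = Encoder1D(latent_dim)  # z_inst: instrument effects

    def forward(self, x: torch.Tensor):
        return self.phys_encoder(x), self.inst_encoder(x)

# Usage: a batch of 8 light curves of length 512.
model = DualEncoder()
z_phys, z_inst = model(torch.randn(8, 1, 512))
```

Downstream heads would presumably consume only `z_phys`, letting `z_inst` absorb the systematics, though the paper's exact fine-tuning recipe is not specified here.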
📝 Abstract
Foundation models for structured time series data must contend with a fundamental challenge: observations often conflate the true underlying physical phenomena with systematic distortions introduced by measurement instruments. This entanglement limits model generalization, especially in heterogeneous or multi-instrument settings. We present a causally motivated foundation model that explicitly disentangles physical and instrumental factors using a dual-encoder architecture trained with structured contrastive learning. Leveraging naturally occurring observational triplets (i.e., where the same target is measured under varying conditions, and distinct targets are measured under shared conditions), our model learns separate latent representations for the underlying physical signal and instrument effects. Evaluated on simulated astronomical time series designed to resemble the complexity of variable stars observed by missions like NASA's Transiting Exoplanet Survey Satellite (TESS), our method significantly outperforms traditional single-latent-space foundation models on downstream prediction tasks, particularly in low-data regimes. These results demonstrate that our model supports key capabilities of foundation models, including few-shot generalization and efficient adaptation, and highlight the importance of encoding causal structure into representation learning for structured data.
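The structured contrastive objective over observational triplets can be illustrated similarly. The sketch below, again assuming PyTorch and the `DualEncoder` above, shows one plausible way to turn a triplet into a disentangling loss: same-target pairs attract in the physical latent, same-instrument pairs attract in the instrument latent. The cosine-similarity triplet form and the margin value are assumptions, not the paper's exact formulation.

```python
# Hypothetical triplet objective for disentangling the two latents.
# The specific loss form (cosine-similarity triplet margin) is an assumption.
import torch
import torch.nn.functional as F

def disentangling_triplet_loss(model, x_anchor, x_same_target, x_same_inst,
                               margin: float = 0.5):
    """x_anchor:      target A observed with instrument 1
       x_same_target: target A observed with instrument 2 (physical positive)
       x_same_inst:   target B observed with instrument 1 (instrument positive)
    """
    zp_a, zi_a = model(x_anchor)
    zp_t, zi_t = model(x_same_target)
    zp_i, zi_i = model(x_same_inst)

    def sim(u, v):
        return F.cosine_similarity(u, v, dim=-1)

    # Physical latent: pull same-target pairs together, push apart series
    # that merely share an instrument.
    loss_phys = F.relu(margin - sim(zp_a, zp_t) + sim(zp_a, zp_i)).mean()
    # Instrument latent: pull same-instrument pairs together, push apart
    # same-target series observed with a different instrument.
    loss_inst = F.relu(margin - sim(zi_a, zi_i) + sim(zi_a, zi_t)).mean()
    return loss_phys + loss_inst
```

In practice such triplets would be mined from archives where the same source is observed under multiple conditions, e.g., the same star across different TESS sectors or different instruments.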