🤖 AI Summary
Scientific time series data are often sparse, heterogeneous, and limited in scale, posing significant challenges for unified representation learning. To address these challenges, this work proposes a cross-domain knowledge distillation framework that systematically integrates complementary knowledge from foundation models pretrained on related time series domains to construct a universal encoder. The approach introduces an adaptive patching strategy to handle variable sequence lengths and a statistical compensation mechanism to mitigate discrepancies arising from numerical scale variations. Extensive experiments on seven scientific time series tasks demonstrate that the proposed method substantially enhances generalization and transferability, establishing a new paradigm for representation learning on scientific data.
📝 Abstract
Scientific time series are central to scientific AI but are typically sparse, highly heterogeneous, and limited in scale, making unified representation learning particularly challenging. Meanwhile, foundation models pretrained on related time series domains such as audio, general time series, and brain signals contain rich knowledge, but their applicability to scientific signals remains underexplored. In this paper, we investigate the transferability and complementarity of foundation models from related time series domains, and study how to effectively leverage them to build a unified encoder for scientific time series. We first systematically evaluate these foundation models, demonstrating effective knowledge transfer to scientific tasks and complementary strengths across models. Based on this observation, we propose STEP, a Scientific Time Series Encoder Pretraining framework via cross-domain distillation. STEP introduces adaptive patching to handle extreme-length sequences and a statistics compensation scheme to accommodate diverse numerical scales. It further leverages cross-domain distillation to integrate knowledge from multiple foundation models into a unified encoder. By combining complementary representations across different domains, STEP learns general-purpose, transferable features tailored for scientific signals. Experiments on seven scientific time series tasks demonstrate that STEP provides both an effective structure and an effective pretraining paradigm, taking a STEP toward scientific time series representation learning.
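The three mechanisms named in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the patch-sizing rule, the choice of (mean, std) as compensation statistics, and the per-teacher linear projections are all assumptions made for the example.

```python
import numpy as np

def adaptive_patch(x, target_patches=64):
    """Adaptive patching (hypothetical rule): grow the patch length with
    the sequence length so even extreme-length series yield a bounded
    number of patches. Zero-pads the tail to a whole number of patches."""
    patch_len = max(1, int(np.ceil(len(x) / target_patches)))
    pad = (-len(x)) % patch_len
    x = np.pad(np.asarray(x, dtype=float), (0, pad))
    return x.reshape(-1, patch_len), patch_len

def stats_compensate(patches):
    """Statistics compensation (sketch): normalize each patch, but keep
    its (mean, std) as side features so the numerical-scale information
    removed by normalization can be reinjected downstream."""
    mean = patches.mean(axis=1, keepdims=True)
    std = patches.std(axis=1, keepdims=True) + 1e-8
    normed = (patches - mean) / std
    stats = np.concatenate([mean, std], axis=1)  # (n_patches, 2)
    return normed, stats

def distill_loss(student_emb, teacher_embs, projections):
    """Cross-domain distillation (sketch): match the student embedding
    to each teacher's embedding through a per-teacher projection, and
    average the per-teacher MSE losses."""
    losses = [np.mean((student_emb @ W - t) ** 2)
              for t, W in zip(teacher_embs, projections)]
    return float(np.mean(losses))

# Example: a 10,000-step series becomes 64 patches of length 157.
patches, plen = adaptive_patch(np.arange(10_000.0), target_patches=64)
normed, stats = stats_compensate(patches)
```

In a real pipeline the student encoder would consume `normed` and `stats`, and the projections would be learned jointly with the student; here they only fix the shapes of the distillation objective.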