AI Summary
Existing foundation models for time series focus primarily on forecasting, paying insufficient attention to cross-domain missing-value imputation. Method: We propose the first general-purpose imputation framework designed for cross-domain generalization. It employs implicit neural representations (INRs) to model time series as continuous functions, enabling unified handling of diverse missing patterns and variable sampling rates. Further, we introduce a Mixture of Timeflow Models (MoTM) mechanism that combines multiple independently trained INR base models with a context-adaptive ridge regressor for compositional generalization to unseen sequences. Contribution/Results: Extensive experiments under block-wise and point-wise missingness, as well as cross-domain settings, demonstrate significant improvements in out-of-distribution imputation robustness and generalization. Our approach establishes a novel paradigm for building universal time series imputation models.
Abstract
Recent years have witnessed growing interest in time series foundation models, with a strong emphasis on the forecasting task. Yet the crucial task of out-of-domain imputation of missing values remains largely underexplored. We propose a first step to fill this gap by leveraging implicit neural representations (INRs). INRs model time series as continuous functions and naturally handle various missing-data scenarios and sampling rates. While they show strong performance within specific distributions, they struggle under distribution shifts. To address this, we introduce MoTM (Mixture of Timeflow Models), a step toward a foundation model for time series imputation. Building on the idea that a new time series is a mixture of previously seen patterns, MoTM combines a basis of INRs, each trained independently on a distinct family of time series, with a ridge regressor that adapts to the observed context at inference. We demonstrate robust in-domain and out-of-domain generalization across diverse imputation scenarios (e.g., block and pointwise missingness, variable sampling rates), paving the way for adaptable foundation imputation models.
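The core inference step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pretrained INR basis models are replaced here by simple hypothetical continuous functions, and the ridge regularization strength `lam` is an assumed placeholder. The sketch shows how mixture weights are fit on the observed timestamps via closed-form ridge regression and then used to impute values at arbitrary query times.

```python
import numpy as np

# Hypothetical stand-ins for the basis of INRs: in MoTM, each f_k would be an
# implicit neural representation trained on a distinct family of time series.
# Here we use simple continuous functions purely for illustration.
basis = [
    lambda t: np.sin(2 * np.pi * t),
    lambda t: np.cos(2 * np.pi * t),
    lambda t: t,
]

def fit_ridge_weights(t_obs, y_obs, lam=1e-2):
    """Adapt mixture weights to the observed context via ridge regression."""
    # Design matrix: each column is one basis INR evaluated at observed times.
    Phi = np.stack([f(t_obs) for f in basis], axis=1)   # shape (n_obs, K)
    K = Phi.shape[1]
    # Closed-form ridge solution: (Phi^T Phi + lam*I)^{-1} Phi^T y
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(K), Phi.T @ y_obs)

def impute(t_query, w):
    """Evaluate the weighted mixture at arbitrary (possibly missing) timestamps."""
    Phi_q = np.stack([f(t_query) for f in basis], axis=1)
    return Phi_q @ w

# Toy usage: the observed series is itself a mixture of the basis patterns,
# so the ridge weights recover the mixing coefficients and fill in gaps.
t_obs = np.linspace(0.0, 1.0, 50)
y_obs = 0.7 * np.sin(2 * np.pi * t_obs) + 0.3 * t_obs
w = fit_ridge_weights(t_obs, y_obs)
y_hat = impute(np.array([0.25, 0.5]), w)
```

Because the INRs are continuous in time, the same fitted weights serve any query timestamp, which is what makes this formulation agnostic to sampling rate and to block versus pointwise missingness.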