AI Summary
Problem: Time-indexed foundation models (e.g., TabPFN-TS, MoTM) lack a systematic evaluation for zero-shot time-series imputation.
Method: We conduct the first large-scale empirical study across 33 cross-domain datasets (~1.3 million imputation windows), assessing the models' general-purpose zero-shot imputation capability without fine-tuning and evaluating the impact of integrating dynamic covariates at inference time.
Contribution/Results: Both model families achieve high imputation accuracy across diverse missingness patterns and domains, significantly outperforming conventional methods. Integrating covariates at inference time further enhances robustness and generalization. Our findings establish time-indexed foundation models as a viable paradigm for universal zero-shot imputation, enabling lightweight, plug-and-play time-series reconstruction in real-world applications.
Abstract
Foundation models for time series imputation remain largely unexplored. Recently, two such models, TabPFN-TS and MoTM, have emerged; they share a common philosophy that places them within the family of time-indexed foundation models. This paper presents the first large-scale empirical study of these models for zero-shot imputation, i.e., recovering missing values across a wide range of scenarios without retraining. We conduct extensive univariate experiments on 33 out-of-domain datasets (approximately 1.3M imputation windows) and evaluate the models' ability to integrate covariates at inference time to improve accuracy without fine-tuning. Our results demonstrate that time-indexed foundation models are a powerful and practical step toward general-purpose, zero-shot imputation for real-world time series.
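To make the evaluation setup concrete, the sketch below illustrates the general protocol behind "imputation windows": hide contiguous blocks of a series, impute them, and score the error only at the hidden positions. This is a minimal, hypothetical illustration using a simple linear-interpolation baseline in place of the foundation models studied in the paper; the function names (`mask_windows`, `linear_impute`) and all parameters are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def mask_windows(series, n_windows=3, window_len=24, rng=None):
    """Hide contiguous windows of values; return the masked copy and the mask.

    Each window mimics one "imputation window" in the evaluation protocol.
    (Illustrative helper, not from the paper.)
    """
    rng = rng or np.random.default_rng(0)
    masked = series.astype(float)
    mask = np.zeros(len(series), dtype=bool)
    for _ in range(n_windows):
        start = rng.integers(0, len(series) - window_len)
        mask[start:start + window_len] = True
    masked[mask] = np.nan
    return masked, mask

def linear_impute(masked):
    """Conventional baseline: linear interpolation over the time index."""
    idx = np.arange(len(masked))
    known = ~np.isnan(masked)
    out = masked.copy()
    out[~known] = np.interp(idx[~known], idx[known], masked[known])
    return out

# Synthetic ground truth: noisy sine wave standing in for a real series.
t = np.arange(500)
truth = np.sin(2 * np.pi * t / 50) \
    + 0.1 * np.random.default_rng(1).standard_normal(500)

masked, mask = mask_windows(truth)
imputed = linear_impute(masked)

# Score only on the hidden positions, as in window-based imputation benchmarks.
mae = np.abs(imputed[mask] - truth[mask]).mean()
```

A zero-shot foundation model would replace `linear_impute` here: it receives the observed points (and, optionally, covariates such as calendar features) and fills the masked windows without any retraining on the target dataset.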