🤖 AI Summary
This paper addresses the challenge of quantifying similarity among time-series datasets. We propose a distribution-based similarity measure: each dataset is modeled as a multivariate Gaussian distribution, and the dissimilarity between two datasets is computed as the Wasserstein distance between their fitted Gaussians, built on empirical mean and covariance estimation and the geometry of the underlying distributions. To our knowledge, this is the first work to apply the Wasserstein distance at the dataset level for time-series similarity modeling. The measure effectively captures distributional shifts and supports downstream tasks including model selection, fine-tuning, and visualization. Experiments show that it correlates strongly (Pearson *r* > 0.60) with the inference loss of base models in out-of-distribution and transfer learning settings, making model evaluation more efficient and improving prediction of generalization performance.
📝 Abstract
The emergence of time-series foundation model research heightens the need to measure the (dis)similarity of time-series datasets. A time-series dataset similarity measure aids research in multiple ways, including model selection, fine-tuning, and visualization. In this paper, we propose a distribution-based method to measure time-series dataset similarity by leveraging the Wasserstein distance. We treat a time-series dataset as an empirical realization of an underlying multivariate normal distribution (MVN). The similarity between two time-series datasets is thus computed as the Wasserstein distance between their corresponding MVNs. Comprehensive experiments and visualization show the effectiveness of our approach. Specifically, we show how the Wasserstein distance helps identify similar time-series datasets and facilitates inference performance estimation of foundation models in both out-of-distribution and transfer learning evaluation, with high correlation between our proposed measure and the inference loss (Pearson *r* > 0.60).
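The core computation described above can be sketched as follows: each dataset is summarized by its empirical mean and covariance, and the 2-Wasserstein distance between the two fitted Gaussians has a well-known closed form. This is a minimal illustration of the general idea, not the paper's implementation; the function name and the choice of flattened samples as input are assumptions for the example.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(X, Y):
    """2-Wasserstein distance between Gaussians fitted to two datasets.

    X, Y: arrays of shape (n_samples, n_features), e.g. flattened
    time-series windows (an assumed preprocessing choice). Uses the
    closed form for Gaussians:
        W2^2 = ||m1 - m2||^2 + Tr(C1 + C2 - 2 (C2^{1/2} C1 C2^{1/2})^{1/2})
    """
    m1, m2 = X.mean(axis=0), Y.mean(axis=0)
    C1 = np.cov(X, rowvar=False)
    C2 = np.cov(Y, rowvar=False)
    C2_half = sqrtm(C2)
    cross = sqrtm(C2_half @ C1 @ C2_half)
    # sqrtm may return tiny imaginary components from numerical error
    cross = np.real(cross)
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * cross)
    # clamp small negative values caused by floating-point round-off
    return float(np.sqrt(max(w2_sq, 0.0)))
```

As a sanity check, a pure mean shift with identical covariances reduces the distance to the Euclidean distance between the means, which matches the intuition that the measure captures distributional shift.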