Methods for quantifying dataset similarity: a review, taxonomy and comparison

📅 2023-12-07
🏛️ Statistics Survey
📈 Citations: 7
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the need to quantify dataset similarity, which underpins model generalizability, transfer learning, simulation-study design, and two- and $k$-sample testing. The authors systematically survey 118 similarity quantification methods and propose a taxonomy that organizes them into ten classes, covering approaches such as statistical distances (e.g., the Wasserstein distance and Maximum Mean Discrepancy), kernel-based methods, information-theoretic measures, dimensionality-reduction embeddings, permutation tests, generative-model-based discriminators, and Gaussian process likelihood ratios. The methods are compared with respect to applicability, interpretability, and theoretical properties, yielding structured recommendations that match measures to the goal of the comparison and to the characteristics of the datasets at hand. An accompanying open-source online tool supports interactive method selection.
📝 Abstract
Quantifying the similarity between datasets has widespread applications in statistics and machine learning. The performance of a predictive model on novel datasets, referred to as generalizability, depends on how similar the training and evaluation datasets are. Exploiting or transferring insights between similar datasets is a key aspect of meta-learning and transfer learning. In simulation studies, the similarity between the distributions of simulated datasets and the real datasets on which the performance of methods is assessed is crucial. In two- or $k$-sample testing, it is checked whether the underlying distributions of two or more datasets coincide. A great many approaches for quantifying dataset similarity have been proposed in the literature. We examine more than 100 methods and provide a taxonomy, classifying them into ten classes. In an extensive review of these methods, the main underlying ideas, formal definitions, and important properties are introduced. We compare the 118 methods in terms of their applicability, interpretability, and theoretical properties, in order to provide recommendations for selecting an appropriate dataset similarity measure based on the specific goal of the dataset comparison and on the properties of the datasets at hand. An online tool facilitates the choice of the appropriate dataset similarity measure.
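To make the measures named above concrete, here is a minimal sketch computing three of them on synthetic one-dimensional samples: the Wasserstein-1 distance, a Maximum Mean Discrepancy (MMD) estimate with a Gaussian kernel, and a classical two-sample Kolmogorov-Smirnov test. The sample sizes, the mean shift, and the kernel bandwidth are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance, ks_2samp

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=500)  # "dataset" 1
y = rng.normal(loc=0.5, scale=1.0, size=500)  # "dataset" 2, shifted mean

# 1) Wasserstein-1 distance between the two empirical distributions
w1 = wasserstein_distance(x, y)

# 2) Squared MMD (biased estimate) with a Gaussian (RBF) kernel;
#    bandwidth=1.0 is an arbitrary illustrative choice
def mmd_rbf(a, b, bandwidth=1.0):
    def k(u, v):
        d2 = (u[:, None] - v[None, :]) ** 2
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

mmd = mmd_rbf(x, y)

# 3) Two-sample Kolmogorov-Smirnov test of equal distributions
ks_stat, p_value = ks_2samp(x, y)

print(f"Wasserstein-1: {w1:.3f}, MMD^2: {mmd:.4f}, KS p-value: {p_value:.2e}")
```

Note that the three measures answer slightly different questions: the Wasserstein and MMD values quantify *how far apart* the distributions are, while the test returns a decision-oriented p-value, which is one of the distinctions the paper's taxonomy draws.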
Problem

Research questions and friction points this paper is trying to address.

Review and classify methods for quantifying dataset similarity
Compare 118 methods on applicability and interpretability
Provide recommendations for selecting dataset similarity measures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reviewing over 100 dataset similarity methods
Classifying methods into ten distinct categories
Providing online tool for measure selection
Marieke Stolte
Department of Statistics, TU Dortmund University
Franziska Kappenberg
Department of Statistics, TU Dortmund University
Jörg Rahnenführer
Department of Statistics, TU Dortmund University
Andrea Bommert
TU Dortmund University