🤖 AI Summary
Existing dataset similarity measures suffer from high computational cost, narrow applicability, sensitivity to data attributes and hyperparameters, and insufficient global robustness. To address these limitations, this paper proposes two novel similarity metrics specifically designed for synthetic data quality assessment and feature selection validation. We introduce the first holistic dataset similarity framework that simultaneously guarantees theoretical soundness, computational efficiency, and parameter robustness. Our approach jointly models probability distances and kernel embeddings by integrating the Maximum Mean Discrepancy (MMD) with geometric consistency constraints—requiring no distributional assumptions and supporting arbitrary-dimensional and heterogeneous data. Evaluated on 12 benchmark datasets, our method achieves an average 37.2% improvement in correlation accuracy over state-of-the-art methods. Moreover, it effectively guides synthetic data generation and feature subset selection.
📝 Abstract
Measuring inter-dataset similarity is an important task in machine learning and data mining with various use cases and applications. Existing methods for measuring inter-dataset similarity are computationally expensive, limited, or sensitive to different entities and non-trivial choices for parameters. They also lack a holistic perspective on the entire dataset. In this paper, we propose two novel metrics for measuring inter-dataset similarity. We discuss the mathematical foundation and the theoretical basis of our proposed metrics. We demonstrate the effectiveness of the proposed metrics by investigating two applications in the evaluation of synthetic data and in the evaluation of feature selection methods. The theoretical and empirical studies conducted in this paper illustrate the effectiveness of the proposed metrics.