🤖 AI Summary
Current privacy evaluation of synthetic tabular data lacks standardized benchmarks, hindering cross-study comparability and impeding comprehension by non-technical stakeholders.
Method: We propose the CAIR framework, grounded in four principles (Comparability, Applicability, Interpretability, and Representativeness), and formalize them into the first systematic, quantitative, general-purpose framework for assessing privacy metrics. We design a 16-dimensional CAIR scoring rubric that enables multi-dimensional quantification, ranked comparison, and diagnostic identification of weaknesses. Using a mixed qualitative–quantitative methodology and cross-metric empirical analysis, we evaluate the CAIR compliance of mainstream privacy metrics.
Contribution/Results: Our assessment reveals structural strengths and limitations of existing metrics, establishing CAIR as an actionable, consensus-oriented benchmark for both academia and industry to rigorously assess and compare synthetic-data privacy guarantees.
📝 Abstract
Data sharing is a necessity for innovative progress in many domains, especially in healthcare. However, the ability to share data is hindered by regulations protecting the privacy of natural persons. Synthetic tabular data offer a promising way to address data-sharing difficulties, but they do not inherently guarantee privacy. Moreover, there is a lack of agreement on appropriate methods for assessing the privacy-preserving capabilities of synthetic data, making it difficult to compare results across studies. To the best of our knowledge, this is the first work to identify the properties that constitute good universal privacy evaluation metrics for synthetic tabular data. The goal of such metrics is to enable comparability across studies and to allow non-technical stakeholders to understand how privacy is protected. We identify four principles for the assessment of metrics: Comparability, Applicability, Interpretability, and Representativeness (CAIR). To quantify and rank the degree to which evaluation metrics conform to the CAIR principles, we design a rubric using a 1-4 scale. Each of the four properties is scored on four parameters, yielding 16 dimensions in total. We study the applicability and usefulness of the CAIR principles and rubric by assessing a selection of metrics popular in other studies. The results provide granular insights into the strengths and weaknesses of existing metrics, not only ranking the metrics but also highlighting areas for potential improvement. We expect that the CAIR principles will foster agreement among researchers and organizations on which universal privacy evaluation metrics are appropriate for synthetic tabular data.
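To make the rubric's structure concrete, the sketch below shows one way the 16 dimension scores (four principles, four parameters each, on a 1-4 scale) could be aggregated into per-principle and overall scores. The parameter grouping, function name, and mean-based aggregation are illustrative assumptions of ours, not definitions taken from the paper.

```python
# Hypothetical sketch of aggregating CAIR rubric scores. Each metric is
# scored on four principles, each with four parameters on a 1-4 scale,
# i.e. 16 dimensions in total. The mean aggregation is an assumption.

PRINCIPLES = ("Comparability", "Applicability", "Interpretability", "Representativeness")

def cair_score(scores: dict[str, list[int]]) -> tuple[dict[str, float], float]:
    """Aggregate 16 dimension scores into per-principle means and an overall mean."""
    if set(scores) != set(PRINCIPLES):
        raise ValueError("scores must cover exactly the four CAIR principles")
    for principle, values in scores.items():
        if len(values) != 4 or not all(1 <= v <= 4 for v in values):
            raise ValueError(f"{principle}: need four scores on the 1-4 scale")
    per_principle = {p: sum(v) / 4 for p, v in scores.items()}
    overall = sum(per_principle.values()) / 4
    return per_principle, overall

# Example: a metric that is comparable and interpretable but less applicable.
example = {
    "Comparability": [4, 3, 4, 4],
    "Applicability": [2, 3, 2, 3],
    "Interpretability": [4, 4, 3, 4],
    "Representativeness": [3, 2, 3, 3],
}
per_principle, overall = cair_score(example)
```

Keeping the per-principle means alongside the overall score supports the diagnostic use described above: a low Applicability mean, for instance, pinpoints where a metric needs improvement even when its overall score is competitive.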