🤖 AI Summary
Current privacy assessments of synthetic data lack standardized, quantifiable metrics, particularly for identity, membership, and attribute disclosure risks. Method: We introduce an expert-consensus-driven privacy measurement framework, using a Delphi-style panel process to define the conceptual boundaries of these three risk categories. The panel found prevailing similarity-based metrics inadequate for privacy evaluation and considered differential privacy budgets interpretable only when close to zero. Contribution/Results: The framework yields an actionable, taxonomy-aware checklist of recommended privacy metrics and explicitly identifies open research gaps, providing a reproducible foundation and practical toolkit for regulation-compliant privacy assessment of synthetic data.
📄 Abstract
Synthetic data generation is one approach for sharing individual-level data. However, to meet legislative requirements, it must be demonstrated that individuals' privacy is adequately protected, and there is no consolidated standard for measuring privacy in synthetic data. Through an expert panel and consensus process, we developed a framework for evaluating privacy in synthetic data. Our findings indicate that current similarity metrics fail to measure identity disclosure, and we discourage their use for that purpose. For differentially private synthetic data, only privacy budgets close to zero were considered interpretable. There was consensus on the importance of membership and attribute disclosure, both of which involve inferring personal information about an individual without necessarily revealing their identity. The resulting framework provides precise recommendations for metrics that effectively address these types of disclosure. Our findings also identify specific opportunities for future research that can support the widespread adoption of synthetic data.
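To make the critique of similarity metrics concrete, here is a minimal sketch of one widely used example, distance-to-closest-record (DCR). The function name and toy data are hypothetical illustrations, not part of the paper's framework; the point is that a metric like this only measures how geometrically close synthetic records are to real ones, and does not by itself quantify whether an attacker can re-identify anyone.

```python
import numpy as np

def distance_to_closest_record(synthetic: np.ndarray, real: np.ndarray) -> np.ndarray:
    """For each synthetic record, the Euclidean distance to its nearest real record.

    This is the kind of similarity-based metric the expert panel discourages
    as a measure of identity disclosure: a small DCR shows geometric closeness,
    not actual re-identification risk.
    """
    # Pairwise differences: shape (n_synthetic, n_real, n_attributes).
    diffs = synthetic[:, None, :] - real[None, :, :]
    # Euclidean distance for every synthetic/real pair.
    dists = np.linalg.norm(diffs, axis=2)
    # Distance from each synthetic record to its closest real record.
    return dists.min(axis=1)

# Hypothetical toy data: 4 real and 3 synthetic records with 2 attributes each.
real = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0]])
synthetic = np.array([[0.1, 0.1], [1.5, 1.5], [5.0, 5.0]])
print(distance_to_closest_record(synthetic, real))
```

A low DCR for the first synthetic record flags similarity to a real record, but the framework's point stands: such scores are not interpretable as identity, membership, or attribute disclosure risk.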