Rethinking Anonymity Claims in Synthetic Data Generation: A Model-Centric Privacy Attack Perspective

📅 2026-01-30
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current evaluations of synthetic data anonymity are largely confined to the dataset level, overlooking the privacy risks inherent in the generative models themselves and thus failing to meet real-world regulatory compliance requirements. This work addresses that gap by adopting a model-centric perspective, establishing a privacy risk assessment framework that explicitly links the GDPR notion of "identifiability" with privacy attacks targeting generative models. Through a comparative analysis of differential privacy (DP) and similarity-based privacy metrics (SBPMs), the study argues that SBPMs do not adequately mitigate identifiability risks, whereas DP can provide stronger, more reliable guarantees. The findings offer a more robust and responsible technical foundation for evaluating privacy and ensuring regulatory compliance in synthetic data systems.
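To make the dataset-level framing concrete, the sketch below shows one common similarity-based privacy metric, distance to closest record (DCR); the function names and threshold are illustrative assumptions, not taken from the paper. A check like this inspects only a single released synthetic dataset, which is exactly the limitation highlighted here: it says nothing about what an adversary can learn by interacting with the generative model itself.

```python
# Minimal sketch of a dataset-level similarity-based privacy metric (SBPM):
# distance to closest record (DCR). Names and the threshold are illustrative,
# not from the paper.
import numpy as np

def dcr_scores(synthetic: np.ndarray, training: np.ndarray) -> np.ndarray:
    """Euclidean distance from each synthetic record to its closest training record."""
    # Pairwise distances: shape (n_synthetic, n_training)
    dists = np.linalg.norm(synthetic[:, None, :] - training[None, :, :], axis=-1)
    return dists.min(axis=1)

def passes_sbpm_check(synthetic, training, threshold=0.1):
    """Dataset-level check: every synthetic row must keep some distance from its
    nearest real training row. Passing this says nothing about model-level leakage."""
    return bool((dcr_scores(synthetic, training) > threshold).all())

# Toy usage
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 5))
synth = rng.normal(size=(50, 5))
print(passes_sbpm_check(synth, train))
```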

๐Ÿ“ Abstract
Training generative machine learning models to produce synthetic tabular data has become a popular approach for enhancing privacy in data sharing. As this typically involves processing sensitive personal information, releasing either the trained model or generated synthetic datasets can still pose privacy risks. Yet, recent research, commercial deployments, and privacy regulations like the General Data Protection Regulation (GDPR) largely assess anonymity at the level of an individual dataset. In this paper, we rethink anonymity claims about synthetic data from a model-centric perspective and argue that meaningful assessments must account for the capabilities and properties of the underlying generative model and be grounded in state-of-the-art privacy attacks. This perspective better reflects real-world products and deployments, where trained models are often readily accessible for interaction or querying. We interpret the GDPR's definitions of personal data and anonymization under such access assumptions to identify the types of identifiability risks that must be mitigated and map them to privacy attacks across different threat settings. We then argue that synthetic data techniques alone do not ensure sufficient anonymization. Finally, we compare the two mechanisms most commonly used alongside synthetic data -- Differential Privacy (DP) and Similarity-based Privacy Metrics (SBPMs) -- and argue that while DP can offer robust protections against identifiability risks, SBPMs lack adequate safeguards. Overall, our work connects regulatory notions of identifiability with model-centric privacy attacks, enabling more responsible and trustworthy regulatory assessment of synthetic data systems by researchers, practitioners, and policymakers.
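Under the access assumptions described in the abstract (a trained generative model that can be queried or sampled), a model-centric assessment is grounded in privacy attacks such as membership inference. The sketch below is a minimal, hypothetical black-box scoring rule, not one of the specific state-of-the-art attacks the paper maps to GDPR identifiability risks: the adversary samples from the released model and flags a target record as a likely training member if it lies unusually close to the model's output distribution compared with reference (non-member) records.

```python
# Minimal sketch of a model-centric, black-box membership inference heuristic
# against a generative model. The adversary only needs to sample from the
# released model, as in real deployments where models are queryable.
import numpy as np

def infer_membership(target, reference_pop, sample_model, n_samples=10_000, alpha=0.05):
    """Flag `target` as a likely training member if it is closer to the model's
    samples than all but a small fraction (alpha) of reference records."""
    synth = sample_model(n_samples)  # the only access needed: sampling the model
    min_dist = lambda x: np.linalg.norm(synth - x, axis=1).min()
    target_d = min_dist(target)
    ref_d = np.array([min_dist(r) for r in reference_pop])
    return bool(target_d < np.quantile(ref_d, alpha))

# Toy usage with a stand-in "model" that just returns Gaussian noise.
rng = np.random.default_rng(0)
sample_model = lambda n: rng.normal(size=(n, 5))
target = rng.normal(size=5)
reference = rng.normal(size=(200, 5))
print(infer_membership(target, reference, sample_model))
```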
Problem

Research questions and friction points this paper is trying to address.

synthetic data
anonymity
generative models
privacy attacks
GDPR
Innovation

Methods, ideas, or system contributions that make the work stand out.

model-centric privacy
synthetic data
differential privacy
privacy attacks
anonymization
🔎 Similar Papers
No similar papers found.