🤖 AI Summary
This paper identifies a fundamental violation of end-to-end differential privacy (DP) guarantees in tabular synthetic data generation: direct extraction of feature domains from private source data inherently breaks DP compliance and substantially amplifies membership inference attack (MIA) risk. Method: We propose the first DP-compliant domain extraction paradigm for tabular data, integrating domain modeling into the privacy-preserving pipeline. Our approach incorporates boundary sensitivity analysis and a customized DP mechanism to enforce strict privacy control over the entire domain extraction process. Contribution/Results: Experiments demonstrate that our method reduces state-of-the-art MIA success rates by over 60% under high privacy budgets (ε ≥ 2), significantly enhancing both the deployment security and practical utility of synthetic tabular data in real-world applications.
📝 Abstract
Privacy attacks, particularly membership inference attacks (MIAs), are widely used to assess the privacy of generative models for tabular synthetic data, including those with Differential Privacy (DP) guarantees. These attacks often exploit outliers, which are especially vulnerable due to their position at the boundaries of the data domain (e.g., at the minimum and maximum values). However, the role of data domain extraction in generative models and its impact on privacy attacks have been overlooked. In this paper, we examine three strategies for defining the data domain: assuming it is externally provided (ideally from public data), extracting it directly from the input data, and extracting it with DP mechanisms. While common in popular implementations and libraries, we show that the second approach breaks end-to-end DP guarantees and leaves models vulnerable. While using a provided domain (if representative) is preferable, extracting it with DP can also defend against popular MIAs, even at high privacy budgets.