How to DP-fy Your Data: A Practical Guide to Generating Synthetic Data With Differential Privacy

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two coupled challenges: privacy leakage from real user-interaction data used in AI training, and the limited representativeness of public datasets. We propose an end-to-end differentially private synthetic data generation framework for multimodal data (image, tabular, and text). The framework integrates sensitive-data preprocessing, adaptive privacy budget allocation, generative-model-driven synthesis, and empirical privacy auditing, and supports both centralized and decentralized deployment. Our key contributions are: (1) the first systematic formulation of a synthetic data generation paradigm that jointly ensures distributional fidelity and rigorous (ε, δ)-differential privacy; (2) substantially improved cross-modal applicability and deployment trustworthiness; and (3) empirical validation demonstrating high data utility, even under stringent privacy constraints (ε ≤ 2), enabling secure data sharing and reuse as a robust alternative to conventional anonymization techniques.
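The "adaptive privacy budget allocation" mentioned above rests on composing (ε, δ) guarantees across the steps of a pipeline. As a minimal illustration only (the paper's actual accountant is not specified here and may use tighter composition theorems), a basic sequential-composition budget tracker might look like:

```python
class PrivacyBudget:
    """Track cumulative (epsilon, delta) spend under basic sequential composition.

    Illustrative sketch: class name and API are hypothetical, not from the paper.
    """

    def __init__(self, epsilon_total, delta_total):
        self.epsilon_total = epsilon_total
        self.delta_total = delta_total
        self.epsilon_spent = 0.0
        self.delta_spent = 0.0

    def spend(self, epsilon, delta):
        # Basic composition: running two mechanisms with (eps1, d1) and
        # (eps2, d2) yields at most (eps1 + eps2, d1 + d2).
        # Refuse the query if it would exceed the total budget.
        if (self.epsilon_spent + epsilon > self.epsilon_total
                or self.delta_spent + delta > self.delta_total):
            raise RuntimeError("privacy budget exhausted")
        self.epsilon_spent += epsilon
        self.delta_spent += delta

    def remaining(self):
        return (self.epsilon_total - self.epsilon_spent,
                self.delta_total - self.delta_spent)
```

For example, a pipeline with a total budget of ε = 2 could spend ε = 0.5 on preprocessing statistics and reserve the rest for model training; advanced composition or Rényi-DP accounting would give a tighter bound than the simple sums used here.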

📝 Abstract
High-quality data is needed to unlock the full potential of AI for end users. However, finding new sources of such data is getting harder: most publicly available human-generated data will soon have been used. Additionally, publicly available data is often not representative of the users of a particular system -- for example, a research speech dataset of contractors interacting with an AI assistant will likely be more homogeneous, well-articulated, and self-censored than the real-world commands that end users will issue. Therefore, unlocking high-quality data grounded in real user interactions is of vital interest. However, the direct use of user data comes with significant privacy risks. Differential Privacy (DP) is a well-established framework for reasoning about and limiting information leakage, and is a gold standard for protecting user privacy. The focus of this work, *Differentially Private Synthetic Data*, refers to synthetic data that preserves the overall trends of source data while providing strong privacy guarantees to the individuals who contributed to the source dataset. DP synthetic data can unlock the value of datasets that have previously been inaccessible due to privacy concerns, and can replace the use of sensitive datasets that previously had only rudimentary protections like ad-hoc rule-based anonymization. In this paper we explore the full suite of techniques surrounding DP synthetic data, the types of privacy protections they offer, and the state of the art for various modalities (image, tabular, text, and decentralized). We outline all the components needed in a system that generates DP synthetic data, from sensitive data handling and preparation to usage tracking and empirical privacy testing. We hope that this work will result in increased adoption of DP synthetic data, spur additional research, and increase trust in DP synthetic data approaches.
Problem

Research questions and friction points this paper is trying to address.

Generating synthetic data under differential privacy to protect user privacy.
Unlocking high-quality data grounded in real user interactions while limiting privacy risk.
Providing strong, formal privacy guarantees for sensitive datasets across multiple modalities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates synthetic data with formal differential privacy guarantees.
Preserves the overall trends of the source data while protecting individual contributors.
Covers multiple data modalities (image, tabular, text, decentralized) and all system components, from data preparation to empirical privacy testing.
👥 Authors
Natalia Ponomareva (Google, Oxford University): Synthetic Data, Large Language Models, Differential Privacy, Transfer Learning
Zheng Xu (Google Research, Mountain View, CA, USA)
H. McMahan (Google Research, Seattle, WA, USA)
P. Kairouz (Google Research, Seattle, WA, USA)
Lucas Rosenblatt (NYU, New York, New York, USA)
Vincent Cohen-Addad (Google Research): Algorithms, Optimization, Clustering
Cristóbal Guzmán (Institute for Mathematical and Computational Engineering, Pontificia Universidad Católica de Chile)
Ryan McKenna (Research Scientist, Google): Differential Privacy, Graphical Models, Machine Learning, Numerical Optimization, Federated Analytics
Galen Andrew (Google Research, Seattle, WA, USA)
Alex Bie (University of Waterloo): Machine Learning, Differential Privacy
Da Yu (Google Research, Mountain View, CA, USA)
Alex Kurakin (Google DeepMind, Mountain View, CA, USA)
Morteza Zadimoghaddam (Research Scientist at Google): Scalable Algorithms, Submodularity, Combinatorial Optimization, Computational Advertising
Sergei Vassilvitskii (Google)
Andreas Terzis (Google DeepMind): Computer Networks, Machine Learning, Privacy, Security