🤖 AI Summary
This study addresses the dual challenges of privacy leakage from real user interaction data in AI training and the insufficient representativeness of public datasets. We propose an end-to-end differentially private synthetic data generation framework for multimodal data (image, tabular, and text). Methodologically, it integrates sensitive data preprocessing, adaptive privacy budget allocation, generative-model-driven synthesis, and empirical privacy auditing, supporting both centralized and decentralized deployment. Our key contributions are: (1) the first systematic formulation of a synthetic data generation paradigm that jointly ensures distributional fidelity and rigorous (ε,δ)-differential privacy; (2) substantially improved cross-modal applicability and deployment trustworthiness; and (3) empirical validation demonstrating high data utility, even under stringent privacy constraints (ε ≤ 2), enabling secure data sharing and reuse as a robust alternative to conventional anonymization techniques.
📝 Abstract
High-quality data is needed to unlock the full potential of AI for end users. However, finding new sources of such data is getting harder: most publicly available human-generated data will soon have been used. Additionally, publicly available data is often not representative of the users of a particular system -- for example, a research speech dataset of contractors interacting with an AI assistant will likely be more homogeneous, well-articulated, and self-censored than the real-world commands that end users will issue. Therefore, unlocking high-quality data grounded in real user interactions is of vital interest. However, the direct use of user data comes with significant privacy risks. Differential Privacy (DP) is a well-established framework for reasoning about and limiting information leakage, and is a gold standard for protecting user privacy. The focus of this work, *Differentially Private Synthetic Data*, refers to synthetic data that preserves the overall trends of source data while providing strong privacy guarantees to the individuals who contributed to the source dataset. DP synthetic data can unlock the value of datasets that have previously been inaccessible due to privacy concerns, and can replace the use of sensitive datasets that previously had only rudimentary protections such as ad-hoc rule-based anonymization. In this paper we explore the full suite of techniques surrounding DP synthetic data, the types of privacy protections they offer, and the state of the art for various modalities (image, tabular, and text) as well as decentralized settings. We outline all the components needed in a system that generates DP synthetic data, from sensitive data handling and preparation, to tracking its use and empirical privacy testing. We hope this work will result in increased adoption of DP synthetic data, spur additional research, and increase trust in DP synthetic data approaches.
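To make the (ε,δ)-DP guarantee referenced above concrete, here is a minimal sketch of the classic Gaussian mechanism applied to a bounded mean query. This is a generic textbook illustration, not the paper's synthetic data pipeline; the function names (`gaussian_sigma`, `dp_mean`) and the bounded-contribution assumption (one clamped value per user) are our own, and the noise calibration shown is the standard one valid for ε ≤ 1.

```python
import math
import random


def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Noise scale for the classic Gaussian mechanism.

    Uses the standard calibration sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon,
    which yields (epsilon, delta)-DP for epsilon <= 1.
    """
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon


def dp_mean(values, lower, upper, epsilon, delta, rng=random):
    """Release a differentially private mean of bounded values.

    Each value is clamped to [lower, upper]; assuming one user contributes
    one value, the sensitivity of the mean is (upper - lower) / n.
    """
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    sigma = gaussian_sigma(epsilon, delta, sensitivity)
    true_mean = sum(clamped) / n
    # Adding Gaussian noise scaled to the query's sensitivity is what
    # bounds what any observer can learn about a single contributor.
    return true_mean + rng.gauss(0.0, sigma)
```

For example, `dp_mean(ratings, 0.0, 1.0, epsilon=1.0, delta=1e-5)` releases a noisy mean whose error shrinks as the number of contributors grows, which is why DP systems, including DP synthetic data generators, become more useful at scale.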