Differentially Private Synthetic Data via Foundation Model APIs 1: Images

📅 2023-05-24
🏛️ International Conference on Learning Representations
📈 Citations: 37
Influential: 11
🤖 AI Summary
Addressing the challenge of differentially private (DP) synthetic image generation using only foundation model inference APIs, this paper introduces the first training-free, purely API-driven DP image synthesis framework. Its core contribution is Private Evolution (PE), an optimization paradigm that combines differential privacy mechanisms, query-based black-box optimization, and gradient-free evolutionary search, enabling high-fidelity DP image generation directly via API calls to foundation models (e.g., Stable Diffusion) under strict black-box constraints. On CIFAR-10 (with ImageNet as the public data), PE achieves FID ≤ 7.9 at a privacy budget of ε = 0.67, a large improvement over the prior SOTA of ε = 32. The method further scales to high-resolution, few-shot private datasets. To foster reproducibility and community advancement, the code and data are publicly released.
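To make the PE loop concrete, below is a minimal, hedged sketch of the algorithm described in the summary: draw an initial population from the model's unconditional generation API, let each private sample vote for its nearest synthetic sample, privatize the vote histogram with Gaussian noise, then resample winners and ask the API for variations. The function names (`random_api`, `variation_api`, `dp_nn_histogram`) and the 2-D toy data are illustrative stand-ins, not the paper's actual implementation, which operates on images via foundation model endpoints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the foundation model's inference APIs.
# In the paper these would be image generation / image variation
# endpoints (e.g., Stable Diffusion); here they emit 2-D points.
def random_api(n):
    """RANDOM_API: draw n unconditional samples."""
    return rng.normal(size=(n, 2))

def variation_api(samples):
    """VARIATION_API: return a perturbed variant of each sample."""
    return samples + 0.1 * rng.normal(size=samples.shape)

def dp_nn_histogram(private_data, synthetic, sigma):
    """Each private point votes for its nearest synthetic sample;
    Gaussian noise makes the vote histogram differentially private."""
    dists = np.linalg.norm(private_data[:, None] - synthetic[None, :], axis=2)
    votes = np.bincount(dists.argmin(axis=1),
                        minlength=len(synthetic)).astype(float)
    votes += rng.normal(scale=sigma, size=votes.shape)  # Gaussian mechanism
    return np.clip(votes, 0.0, None)

def private_evolution(private_data, n_syn=50, iters=5, sigma=1.0):
    population = random_api(n_syn)
    for _ in range(iters):
        hist = dp_nn_histogram(private_data, population, sigma)
        total = hist.sum()
        probs = hist / total if total > 0 else np.full(n_syn, 1.0 / n_syn)
        # Resample promising candidates, then ask the API for variations.
        parents = population[rng.choice(n_syn, size=n_syn, p=probs)]
        population = variation_api(parents)
    return population

private_data = rng.normal(loc=[3.0, -2.0], size=(200, 2))
synthetic = private_evolution(private_data)
print(synthetic.shape)  # prints (50, 2)
```

Note the key property the sketch illustrates: the private data is touched only inside `dp_nn_histogram` (a noisy nearest-neighbor vote), so the foundation model API never sees it, matching the paper's threat model of protecting privacy from the API provider.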
📝 Abstract
Generating differentially private (DP) synthetic data that closely resembles the original private data is a scalable way to mitigate privacy concerns in the current data-driven world. In contrast to current practices that train customized models for this task, we aim to generate DP Synthetic Data via APIs (DPSDA), where we treat foundation models as black boxes and only utilize their inference APIs. Such API-based, training-free approaches are easier to deploy, as exemplified by the recent surge in the number of API-based apps. These approaches can also leverage the power of large foundation models which are only accessible via their inference APIs. However, this comes with greater challenges due to strictly more restrictive model access and the need to protect privacy from the API provider. In this paper, we present a new framework called Private Evolution (PE) to solve this problem and show its initial promise on synthetic images. Surprisingly, PE can match or even outperform state-of-the-art (SOTA) methods without any model training. For example, on CIFAR-10 (with ImageNet as the public data), we achieve FID ≤ 7.9 with privacy cost ε = 0.67, significantly improving the previous SOTA from ε = 32. We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images. The code and data are released at https://github.com/microsoft/DPSDA.
Problem

Research questions and friction points this paper is trying to address.

Generate DP synthetic data via foundation model APIs
Protect privacy without training custom models
Improve synthetic data quality with large foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses foundation model APIs for synthetic data
Training-free approach ensures easy deployment
Achieves strong privacy at a low budget (ε = 0.67 on CIFAR-10)