🤖 AI Summary
In decentralized data settings, differentially private synthetic data generation suffers from a sharp privacy-utility trade-off, particularly for clients with small local datasets, where the high noise required for privacy severely degrades utility. To address this, we propose an enhanced federated DP-CDA framework. Its core innovation is the integration of the CAPE protocol, which coordinates correlated noise across clients so that the noise partially cancels during server-side aggregation, preserving individual privacy guarantees while improving synthetic data quality. The framework combines class-centric random sample mixing, carefully calibrated Gaussian noise injection, and a rigorous (ε, δ)-differential privacy analysis. Experiments on MNIST and FashionMNIST demonstrate that our method achieves synthetic data utility approaching that of the centralized, non-federated baseline while satisfying (ε, δ)-DP, substantially outperforming standard federated DP synthetic data approaches.
📝 Abstract
In this work, we explore differentially private synthetic data generation in a decentralized-data setting by building on the recently proposed Differentially Private Class-Centric Data Aggregation (DP-CDA). DP-CDA synthesizes data in a centralized setting by mixing multiple randomly selected samples from the same class and injecting carefully calibrated Gaussian noise, ensuring (ε, δ)-differential privacy. When deployed in a decentralized or federated setting, where each client holds only a small partition of the data, DP-CDA faces new challenges. The limited sample size per client increases the sensitivity of local computations, requiring higher noise injection to maintain the differential privacy guarantee. This, in turn, leads to a noticeable degradation in utility compared to the centralized setting. To mitigate this issue, we integrate the Correlation-Assisted Private Estimation (CAPE) protocol into the federated DP-CDA framework and propose the CAPE Assisted Federated DP-CDA algorithm. CAPE enables limited collaboration among the clients by allowing them to generate jointly distributed (anti-correlated) noise that cancels out in aggregate, while preserving privacy at the individual level. This technique significantly improves the privacy-utility trade-off in the federated setting. Extensive experiments on the MNIST and FashionMNIST datasets demonstrate that the proposed CAPE Assisted Federated DP-CDA approach can achieve utility comparable to its centralized counterpart under certain parameter regimes, while maintaining rigorous differential privacy guarantees.
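The key idea behind CAPE, anti-correlated noise that cancels in aggregate while each individual release stays noisy, can be illustrated with a minimal NumPy sketch. This is not the actual CAPE protocol (which generates the correlated noise via a secure multi-party procedure rather than a trusted sampler, and calibrates the scales for a formal (ε, δ)-DP guarantee); the client count, dimensions, and noise scales below are arbitrary placeholders chosen only to show the cancellation effect.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 10, 5

# Each client's local quantity (e.g., a noisy class-average of mixed samples).
local_values = rng.normal(size=(num_clients, dim))

sigma = 1.0
# Anti-correlated noise: i.i.d. Gaussian draws projected to sum to zero
# across clients, so this component vanishes when the server aggregates.
e = rng.normal(scale=sigma, size=(num_clients, dim))
e -= e.mean(axis=0, keepdims=True)  # now e.sum(axis=0) == 0

# Small independent local noise, so each client's individual release
# remains protected even against the other clients.
g = rng.normal(scale=0.1 * sigma, size=(num_clients, dim))

# What each client actually releases: heavily noised individually...
releases = local_values + e + g

# ...yet the server-side average is only perturbed by the small g term,
# because the correlated component e cancels exactly.
aggregate = releases.mean(axis=0)
true_mean = local_values.mean(axis=0)
print(np.abs(aggregate - true_mean).max())
```

Each client's release carries noise of scale roughly sigma, but the aggregate's error is driven only by the independent component of scale 0.1·sigma, which is the privacy-utility improvement the abstract refers to.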