🤖 AI Summary
This paper addresses the open problem posed by Ghazi et al. on efficiently sampling from an unknown-parameter, unbounded Gaussian distribution under (ε,δ)-differential privacy.
Method: We propose a novel framework integrating privacy amplification by iteration, robust mean and covariance estimation, and adaptive truncation—specifically designed to mitigate the privacy-accuracy trade-off inherent in unbounded data.
Contribution/Results: Our algorithm is the first to achieve private Gaussian sampling with only Õ(d) samples, attaining optimal sample complexity. This is a quadratic improvement over the prior best bound of Õ(d²), fully resolving the open problem. The analysis establishes strictly tighter error bounds than existing methods, with rigorous (ε,δ)-DP guarantees throughout.
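To make the "clip, privately estimate, then sample" pattern behind private Gaussian sampling concrete, here is a minimal toy sketch using the standard Gaussian mechanism. This is an illustrative assumption-laden simplification, not the paper's algorithm: it privatizes only the mean (identity covariance, known clipping radius), whereas the paper handles unknown, unbounded parameters with Õ(d) samples.

```python
import numpy as np

def private_gaussian_sample(X, eps, delta, clip=1.0, rng=None):
    """Toy sketch (NOT the paper's algorithm): privately estimate the
    mean of X via the Gaussian mechanism, then draw one sample from
    N(mean_hat, I). Assumes a known clipping radius `clip` and
    identity covariance for illustration only."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Clip each row to L2 norm <= clip so that changing one record
    # moves the empirical mean by at most 2*clip/n (L2 sensitivity).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xc = X * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * clip / n
    # Classic Gaussian-mechanism calibration for (eps, delta)-DP
    # (valid for eps <= 1; tighter calibrations exist).
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    mean_hat = Xc.mean(axis=0) + rng.normal(0.0, sigma, size=d)
    # Sample from the crudely estimated Gaussian.
    return rng.normal(mean_hat, 1.0)
```

The hard part the paper addresses is exactly what this sketch elides: without a known bound on the mean or covariance, a fixed clipping radius leaks information or destroys accuracy, which is why the summary's adaptive truncation and robust covariance estimation are needed.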
📝 Abstract
We provide the first $\widetilde{\mathcal{O}}\left(d\right)$-sample algorithm for sampling from unbounded Gaussian distributions under the constraint of $\left(\varepsilon, \delta\right)$-differential privacy. This is a quadratic improvement over previous results for the same problem, settling an open question of Ghazi, Hu, Kumar, and Manurangsi.