🤖 AI Summary
This paper studies optimal noise mechanisms for releasing high-dimensional statistics under approximate differential privacy ((ε,δ)-DP). For d-dimensional statistics with bounded ℓ₂ sensitivity, the authors propose and analyze the ℓ₂ mechanism, which adds noise sampled uniformly from an ℓ₂ ball. They show that its mean squared error matches that of the Laplace mechanism at d = 1 and approaches that of the Gaussian mechanism as d → ∞, characterizing how the mechanism's error scales with dimension under (ε,δ)-DP. Experiments across a range of (ε,δ) settings show that the ℓ₂ mechanism obtains lower error than both the Laplace and Gaussian mechanisms. The work highlights the structural advantage of ℓ₂-norm noise in balancing the privacy-utility trade-off for high-dimensional private release.
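For intuition, here is a minimal sketch, in Python, of the noise-generation step described above: drawing a point uniformly from a d-dimensional ℓ₂ ball via the standard Gaussian-direction construction. The function names and the `radius` parameter are illustrative assumptions; calibrating the radius to the ℓ₂ sensitivity and the (ε,δ) budget is the paper's contribution and is not reproduced here.

```python
import numpy as np

def sample_l2_ball(d: int, radius: float, rng: np.random.Generator) -> np.ndarray:
    """Draw one point uniformly at random from the d-dimensional l2 ball.

    A standard-normal vector, normalized, gives a uniform direction on the
    sphere; scaling the radius by U**(1/d) (U uniform on [0, 1]) matches the
    radial density of the uniform distribution on the ball.
    """
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    return radius * rng.uniform() ** (1.0 / d) * direction

def l2_mechanism(statistic: np.ndarray, radius: float,
                 rng: np.random.Generator | None = None) -> np.ndarray:
    """Illustrative release step: statistic plus uniform l2-ball noise.

    `radius` is a placeholder; in the paper it would be calibrated to the
    statistic's l2 sensitivity and the (eps, delta) privacy parameters.
    """
    rng = rng or np.random.default_rng()
    return statistic + sample_l2_ball(statistic.shape[0], radius, rng)

# Example: privatize a 100-dimensional statistic with a placeholder radius.
noisy = l2_mechanism(np.zeros(100), radius=5.0)
```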
📝 Abstract
We study the $\ell_2$ mechanism for computing a $d$-dimensional statistic with bounded $\ell_2$ sensitivity under approximate differential privacy. Across a range of privacy parameters, we find that the $\ell_2$ mechanism obtains lower error than the Laplace and Gaussian mechanisms, matching the former at $d=1$ and approaching the latter as $d \to \infty$.