🤖 AI Summary
This work addresses the problem of releasing sparse histograms under pure differential privacy (DP) over high-dimensional domains, with the goal of breaking the Õ(n²) running-time barrier of prior algorithms. We propose the first efficient algorithm that simultaneously achieves pure DP and optimal ℓ∞ estimation error. Our method leverages a novel privatization primitive, the private-item blanket, combined with stability analysis and target-length padding, to upgrade an approximate-DP histogram mechanism to a pure-DP guarantee. The algorithm runs in O(n log log d) time in the word-RAM model, a significant improvement over the previous best Õ(n²) bound, and when n ≪ d its ℓ∞ error matches the information-theoretic lower bound. This resolves an open problem posed by Balcer and Vadhan (2019), establishing for the first time a subquadratic running-time bound together with statistically optimal accuracy.
📝 Abstract
We introduce an algorithm that releases a pure differentially private sparse histogram over $n$ participants drawn from a domain of size $d \gg n$. Our method attains the optimal $\ell_\infty$-estimation error and runs in $O(n \ln \ln d)$ time in the word-RAM model, thereby improving upon the previous best known deterministic-time bound of $\tilde{O}(n^2)$ and resolving the open problem of breaking this quadratic barrier (Balcer and Vadhan, 2019). Central to our algorithm is a novel private-item-blanket technique with target-length padding, which transforms the approximate differentially private stability-based histogram algorithm into a pure differentially private one.
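For context, the approximate-DP stability-based histogram that the paper takes as its starting point adds Laplace noise only to the counts of items that actually appear in the data and releases those noisy counts above a threshold; the thresholding step is where the $\delta > 0$ relaxation enters, and it is this step that the paper's pure-DP upgrade addresses. Below is a minimal Python sketch of that standard building block, not of the paper's algorithm itself; the function names are illustrative, and the noise scale and threshold follow one common textbook calibration (constants differ across presentations).

```python
import math
import random
from collections import Counter

def sample_laplace(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def stability_histogram(data, eps: float, delta: float) -> dict:
    """Standard stability-based histogram, (eps, delta)-approximate DP.

    Only items that actually occur receive noisy counts, so the running
    time depends on n rather than the domain size d. Items whose noisy
    count falls below the threshold are suppressed, since revealing the
    mere presence of a rare item could identify a participant; this
    suppression is what forces delta > 0 in the guarantee.
    """
    counts = Counter(data)
    threshold = 1.0 + 2.0 * math.log(2.0 / delta) / eps
    released = {}
    for item, c in counts.items():
        noisy = c + sample_laplace(2.0 / eps)
        if noisy > threshold:
            released[item] = noisy
    return released
```

With these parameters the threshold is roughly $1 + (2/\varepsilon)\ln(2/\delta)$, so a heavy item (count well above the threshold) is released with a count accurate to $O(\log(1/\delta)/\varepsilon)$, while an item held by a single participant is released only with probability at most about $\delta$.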