🤖 AI Summary
This work addresses the fundamental challenge of converting $(\varepsilon,\delta)$-approximate differential privacy (DP) mechanisms into $(\varepsilon,0)$-pure DP mechanisms while preserving utility. We propose the first systematic "purification" framework: by applying randomized post-processing with calibrated noise injection to approximate DP outputs, we provably strengthen privacy, eliminating the $\delta > 0$ term, without altering the original mechanism's structure. The method is theoretically tight, computationally efficient, and maintains high practical utility. We demonstrate its effectiveness, for the first time, in pure DP empirical risk minimization (DP-ERM), Propose-Test-Release (PTR), and differentially private query release, achieving utility close to that of optimal approximate DP baselines. Our core contribution is overcoming the intrinsic limitation imposed by $\delta > 0$, thereby enabling strong pure DP guarantees without sacrificing competitive statistical performance.
📝 Abstract
We propose a framework to convert $(\varepsilon, \delta)$-approximate Differential Privacy (DP) mechanisms into $(\varepsilon, 0)$-pure DP mechanisms, a process we call "purification". This algorithmic technique leverages randomized post-processing with calibrated noise to eliminate the $\delta$ parameter while preserving utility. By combining the tighter utility bounds and computational efficiency of approximate DP mechanisms with the stronger guarantees of pure DP, our approach achieves the best of both worlds. We illustrate the applicability of this framework in various settings, including Differentially Private Empirical Risk Minimization (DP-ERM), data-dependent DP mechanisms such as Propose-Test-Release (PTR), and query release tasks. To the best of our knowledge, this is the first work to provide a systematic method for transforming approximate DP into pure DP while maintaining competitive accuracy and computational efficiency.
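The abstract describes purification as randomized post-processing of an approximate DP output with calibrated noise. The sketch below illustrates the general shape of such a pipeline, not the paper's actual construction: a Gaussian mechanism stands in for an arbitrary $(\varepsilon,\delta)$-approximate DP release, and a hypothetical `purify` step adds Laplace noise to its output. The noise scales and the calibration rule here are placeholders; in the paper's framework the post-processing noise would be calibrated so that the composed release satisfies $(\varepsilon,0)$-pure DP.

```python
import numpy as np


def approx_dp_release(true_value: float, sigma: float, rng) -> float:
    """Stand-in approximate DP mechanism: the Gaussian mechanism,
    which satisfies (eps, delta)-DP for a suitably chosen sigma."""
    return float(true_value + rng.normal(0.0, sigma))


def purify(approx_output: float, b: float, rng) -> float:
    """Hypothetical purification step: randomized post-processing that
    adds Laplace noise of scale b to the approximate DP output.
    The calibration of b to achieve (eps, 0)-pure DP is NOT specified
    in the source; this is an illustrative placeholder."""
    return float(approx_output + rng.laplace(0.0, b))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Release a statistic under approximate DP, then post-process it.
    noisy = approx_dp_release(1.0, sigma=0.5, rng=rng)
    pure = purify(noisy, b=1.0, rng=rng)
    print(pure)
```

Because `purify` only post-processes the released value, it cannot weaken the original privacy guarantee; the substance of the paper's result is that, with the right noise calibration, it provably strengthens the guarantee from approximate to pure DP.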