🤖 AI Summary
Traditional differential privacy (DP) research adopts a “privacy-first” paradigm, whereas real-world applications often require “utility-first” design—minimizing privacy cost while guaranteeing a prescribed utility target. Existing approaches are restricted to Laplace/Gaussian mechanisms, lack support for general sequences of private estimators, and incur additional privacy overhead from hyperparameter tuning.
Method: We generalize ex-post DP to arbitrary sequences of private estimators and propose an adaptive hyperparameter tuning scheme that incurs no extra privacy cost. We further extend this framework to Rényi DP.
Contribution/Results: Our method enables optimal privacy budget allocation under flexible utility constraints. Empirically, it reduces privacy cost by up to 50% while strictly satisfying the specified utility requirement, significantly improving the practicality of utility-first private data analysis.
📝 Abstract
The conventional approach in differential privacy (DP) literature formulates the privacy-utility trade-off with a “privacy-first” perspective: for a predetermined level of privacy, a certain utility is achievable. However, practitioners often operate under a “utility-first” paradigm, prioritizing a desired level of utility and then determining the corresponding privacy cost.
Wu et al. [2019] initiated a formal study of this “utility-first” perspective by introducing ex-post DP. They demonstrated that by adding correlated Laplace noise and progressively reducing it on demand, a sequence of increasingly accurate estimates of a private parameter can be generated, with the privacy cost attributed only to the least noisy iterate released. This led to a Laplace mechanism variant that achieves a specified utility with minimal privacy loss. However, their work, and similar findings by Whitehouse et al. [2022], are primarily limited to simple mechanisms based on Laplace or Gaussian noise.
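The noise-reduction idea can be illustrated most simply with Gaussian noise, where the correlation structure is a reversed random walk (as in the Brownian-motion view of Whitehouse et al. [2022]): the least-noisy noise is sampled first, and each noisier level is that value plus an independent increment, so every earlier release is a post-processing of the later, more accurate one. The sketch below is a toy illustration of this idea, not the authors' exact mechanism; the function names and the `utility_ok` predicate are illustrative placeholders.

```python
import numpy as np

def noise_reduction_release(true_value, variances, utility_ok, rng=None):
    """Release increasingly accurate noisy estimates of true_value.

    variances: strictly decreasing noise variances v_1 > ... > v_K.
    utility_ok: predicate on a released estimate; stop once it holds.

    The noises are correlated so that each noisier estimate equals the
    next (less noisy) one plus independent Gaussian noise -- the key
    property behind paying only for the least noisy iterate released.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(variances, dtype=float)
    # Sample the least-noisy noise first, then add independent
    # increments to build the noisier levels.
    noises = np.empty(len(v))
    noises[-1] = rng.normal(0.0, np.sqrt(v[-1]))
    for k in range(len(v) - 2, -1, -1):
        noises[k] = noises[k + 1] + rng.normal(0.0, np.sqrt(v[k] - v[k + 1]))
    released = []
    for k in range(len(v)):  # release noisiest first
        est = true_value + noises[k]
        released.append(est)
        if utility_ok(est):
            break
    return released
```

Each `noises[k]` has marginal variance exactly `v[k]`, since the increments are independent; the ex-post analysis then attributes privacy cost only to the last (least noisy) level actually released.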
In this paper, we significantly generalize these results. In particular, we extend the work of Wu et al. [2019] and Liu and Talwar [2019] to support any sequence of private estimators, incurring at most a doubling of the original privacy budget. Furthermore, we demonstrate that hyperparameter tuning for these estimators, including the selection of an optimal privacy budget, can be performed without additional privacy cost. Finally, we extend our results to ex-post Rényi DP, further broadening the applicability of utility-first privacy mechanisms.
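The generalized utility-first procedure can be pictured as a loop that tries arbitrary private estimators at increasing privacy budgets and stops at the first release that meets the utility target. The sketch below is a conceptual outline only: `private_estimator` and `meets_target` are placeholders, and the actual ex-post accounting (the at-most-doubling guarantee and the cost-free hyperparameter tuning) is the subject of the paper, not reproduced here.

```python
def utility_first_release(data, budgets, private_estimator, meets_target):
    """Try private estimators at increasing privacy budgets and return
    the first release meeting the utility target.

    budgets: increasing epsilon values to try (must be non-empty).
    private_estimator(data, eps): any eps-DP estimator (placeholder).
    meets_target(est): utility check applied only to the *released*
    estimate, so it is post-processing and costs no extra privacy.
    """
    for eps in budgets:
        est = private_estimator(data, eps)
        if meets_target(est):
            return est, eps
    return est, eps  # fall back to the most accurate attempt
```

Because the stopping rule looks only at already-released estimates, the privacy cost is governed by the budget of the accepted release rather than the full schedule.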