🤖 AI Summary
A common misconception in differential privacy (DP) practice holds that the difficulty of selecting the privacy parameter ε reflects an inherent flaw in DP, impeding its real-world adoption. Method: This paper refutes that view, arguing that the challenge stems from the intrinsic complexity of privacy risk modeling, not from deficiencies in the DP definition itself. It demonstrates through theoretical analysis and comparative evaluation of alternatives that non-DP approaches typically forfeit provable privacy guarantees. Contribution/Results: The work clarifies this conceptual misunderstanding, reaffirming DP's irreplaceable role as a rigorous, foundational privacy framework. It asserts that any risk-assessment method that cannot be formalized within DP must explicitly justify its departure from the framework. By establishing DP as the essential benchmark for quantifiable privacy protection, the paper guides practitioners to reject ad hoc alternatives that lack formal privacy assurances.
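For context, the parameter ε discussed above is the one in the textbook definition of differential privacy; the block below restates that standard definition (the symbols M, D, D', and S are conventional notation, not drawn from the paper itself):

```latex
% Textbook \varepsilon-DP definition (standard notation; not quoted from the paper).
% A randomized mechanism M is \varepsilon-differentially private if, for every pair
% of neighboring datasets D and D' (differing in a single record) and every
% measurable set of outputs S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S].
\]
% Smaller \varepsilon gives a tighter bound on how much any single record can
% shift the output distribution, i.e. a stronger privacy guarantee.
```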
📝 Abstract
This position paper argues that setting the privacy budget in differential privacy should not be viewed as an important limitation of differential privacy compared to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework, or come with a justification of why it cannot be.
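To make concrete where the budget enters in practice, here is a minimal sketch of the Laplace mechanism for a counting query, a standard DP construction (the function name and parameters are illustrative, not taken from the paper). The mechanism itself is a few lines of code; the question the paper addresses is how to choose and interpret the epsilon passed into it:

```python
import numpy as np

def laplace_count(data, predicate, epsilon: float) -> float:
    """Release a count under epsilon-DP via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so noise with scale 1 / epsilon
    suffices for epsilon-DP. Illustrative sketch, not the paper's code.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# The mechanism is mechanical; the hard part is the argument value.
# epsilon = 0.1 and epsilon = 10 are both "differentially private", but
# they bound an adversary's odds ratio by e^0.1 ~ 1.11 versus e^10 ~ 22026.
noisy = laplace_count(range(100), lambda x: x % 2 == 0, epsilon=1.0)
```

The sketch illustrates the paper's point: enforcing the guarantee is straightforward, while the semantics of a given ε depend on contextual privacy risks that no mechanism can determine on its own.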