Setting ε is not the Issue in Differential Privacy

📅 2025-11-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
A common misconception in differential privacy (DP) practice holds that the difficulty of selecting the privacy parameter ε reflects an inherent flaw in DP, impeding its real-world adoption. Method: This paper refutes that view, arguing that the challenge stems from the intrinsic complexity of privacy risk modeling, not from deficiencies in the DP definition itself. It demonstrates through theoretical analysis and comparative evaluation of alternatives that non-DP approaches typically forfeit provable privacy guarantees. Contribution/Results: The work clarifies this conceptual misunderstanding, reaffirming DP's role as a rigorous, foundational privacy framework. It asserts that any risk assessment method that cannot be formalized within DP must explicitly justify its departure from the framework. By establishing DP as the essential benchmark for quantifiable privacy protection, the paper guides practitioners to reject ad hoc alternatives lacking formal privacy assurances.
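For reference, the parameter ε discussed in the summary is the one appearing in the standard definition of differential privacy (this is the textbook formulation, not text taken from the paper itself): a randomized mechanism $M$ satisfies $\varepsilon$-differential privacy if, for every pair of neighboring datasets $D, D'$ (differing in one individual's record) and every measurable output set $S$,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].
```

Smaller ε means the output distributions on neighboring datasets are closer, i.e. any single individual's data has less influence on what an observer can infer.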

📝 Abstract
This position paper argues that setting the privacy budget in differential privacy should not be viewed as an important limitation of differential privacy compared to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework or justify why it cannot.
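To make the role of the privacy budget concrete, the sketch below implements the classic Laplace mechanism (a standard DP primitive, not code from this paper): noise with scale sensitivity/ε is added to a query answer, so a smaller ε buys stronger privacy at the cost of more noise. The function names and the example query are illustrative assumptions.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    A smaller epsilon (tighter privacy budget) yields a larger noise
    scale, i.e. a less accurate but better-protected release.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse CDF of a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Illustrative use: a counting query has sensitivity 1 (one person
# changes the count by at most 1).
rng = random.Random(0)
true_count = 42
loose = laplace_mechanism(true_count, 1.0, epsilon=1.0, rng=rng)   # modest noise
tight = laplace_mechanism(true_count, 1.0, epsilon=0.01, rng=rng)  # heavy noise
```

The point the paper makes is that choosing ε here is hard not because the mechanism is obscure, but because translating "noise of scale 1/ε" into a contextual privacy risk is intrinsically difficult for any rigorous method.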
Problem

Research questions and friction points this paper is trying to address.

Addresses misconceptions about interpreting the privacy budget ε in differential privacy
Argues that alternative privacy-preserving machine learning methods typically lack the provable guarantees of differential privacy
Claims that the challenge of contextual privacy risk assessment is intrinsic to any rigorous privacy method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that sound privacy risk estimation should be expressible within the differential privacy framework
Argues that difficulty interpreting the privacy budget is not a limitation of differential privacy itself
Holds that alternative approaches must justify why they cannot be expressed within differential privacy
🔎 Similar Papers
No similar papers found.