How Well Can Differential Privacy Be Audited in One Run?

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the fundamental precision limits of differential privacy (DP) auditing from a single run: can the true privacy parameter ε be exactly recovered from the output of one execution alone? The authors characterize the maximum achievable audit efficacy in the single-run setting, proving that perfect ε-recovery is possible if and only if the algorithm satisfies "individual influence isolability", i.e., the contribution of each single data point to the output is uniquely identifiable and separable from the others. Methodologically, the analysis combines reasoning from the formal DP definition, information-theoretic lower bounds, and structural modeling of algorithms, extending the multi-instance intervention-based auditing framework of Steinke et al. The core contribution is the first rigorous characterization of intrinsic accuracy limits for single-run DP auditing, accompanied by a decidable, structure-based criterion for ε-recoverability. This establishes theoretical feasibility boundaries and provides practical guidance for auditing DP compliance in real-world machine learning systems.

📝 Abstract
Recent methods for auditing the privacy of machine learning algorithms have improved computational efficiency by simultaneously intervening on multiple training examples in a single training run. Steinke et al. (2024) prove that one-run auditing indeed lower bounds the true privacy parameter of the audited algorithm, and give impressive empirical results. Their work leaves open the question of how precisely one-run auditing can uncover the true privacy parameter of an algorithm, and how that precision depends on the audited algorithm. In this work, we characterize the maximum achievable efficacy of one-run auditing and show that one-run auditing can only perfectly uncover the true privacy parameters of algorithms whose structure allows the effects of individual data elements to be isolated. Our characterization helps reveal how and when one-run auditing is still a promising technique for auditing real machine learning algorithms, despite these fundamental gaps.
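To make the one-run setup concrete, here is a minimal toy sketch of the kind of single-run audit the abstract describes: randomly include or exclude a batch of audit canaries, run the audited mechanism once, and measure how well membership can be guessed from the output. All specifics here (a Gaussian mechanism releasing one noisy coordinate per canary, the 0.5 decision threshold, the parameter values) are illustrative assumptions, not the paper's construction; Steinke et al. additionally convert the guess accuracy into a high-probability lower bound on ε, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 1000      # number of audit canaries (assumed value)
sigma = 1.0   # noise scale of the toy audited mechanism (assumed value)

# Single intervention run: randomly include (1) or exclude (0) each canary.
s = rng.integers(0, 2, size=m)

# One execution of the toy mechanism: a noisy release of each inclusion bit.
out = s + rng.normal(0.0, sigma, size=m)

# Auditor guesses membership from the one observed output.
guess = (out > 0.5).astype(int)
accuracy = float(np.mean(guess == s))
print(f"membership-inference accuracy: {accuracy:.3f}")
```

The fraction of correct guesses is the raw statistic a one-run audit works from; the paper's question is how tightly such a statistic can pin down the mechanism's true ε.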
Problem

Research questions and friction points this paper is trying to address.

Characterize the maximum achievable efficacy of one-run privacy auditing.
Determine how precisely one-run auditing can uncover the true privacy parameter of an algorithm.
Identify the conditions under which one-run auditing is effective for real machine learning algorithms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Builds on one-run auditing, which intervenes on many training examples simultaneously to improve computational efficiency.
Characterizes the maximum achievable efficacy of one-run auditing.
Shows that perfect recovery of the true privacy parameter requires a structure in which the effects of individual data elements can be isolated.
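The isolability condition can be illustrated with a hedged toy contrast (my own example, not from the paper): a mechanism that releases one noisy coordinate per data element keeps individual effects separable, while a mechanism that releases only a noisy aggregate blends them together, so per-element membership guesses from a single run barely beat chance.

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma = 1000, 1.0                    # assumed toy parameters
s = rng.integers(0, 2, size=m)          # random inclusion bits

# Isolable structure: each element gets its own noisy output coordinate,
# so the auditor can test every inclusion bit independently.
per_element = s + rng.normal(0.0, sigma, size=m)
acc_isolable = float(np.mean((per_element > 0.5).astype(int) == s))

# Non-isolable structure: only a noisy sum is released. It reveals roughly
# how many elements were included, but not which ones, so the best
# per-element guess is simply the majority value.
noisy_sum = s.sum() + rng.normal(0.0, sigma)
majority = int(noisy_sum > m / 2)
acc_aggregate = float(np.mean(s == majority))

print(f"isolable: {acc_isolable:.3f}, aggregate-only: {acc_aggregate:.3f}")
```

In the isolable case the audit's per-element accuracy reflects the mechanism's true privacy loss; in the aggregate case it does not, which is the structural gap the paper's characterization formalizes.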
Amit Keinan
The Hebrew University of Jerusalem
Moshe Shenfeld
The Hebrew University of Jerusalem
Katrina Ligett
Hebrew University