🤖 AI Summary
This work addresses the privacy auditing of differential privacy (DP) algorithms for a single execution—bypassing the computational bottleneck of traditional methods requiring thousands of simulations.
Method: We propose a unified information-theoretic auditing framework that models privacy auditing as a bit transmission task over a noisy channel. Leveraging channel capacity, mutual information, statistical hypothesis testing, and empirical validation, we derive the first necessary and sufficient condition for single-execution auditability and establish a tight lower bound on the privacy parameter ε.
Contribution/Results: Our method significantly improves the precision of ε-lower bounds across mainstream DP mechanisms, reduces required observations by several orders of magnitude, and successfully detects privacy violations in flawed implementations. It provides both a rigorous theoretical foundation and a practical tool for efficient, verifiable privacy assurance.
📝 Abstract
Auditing algorithms' privacy typically involves simulating a game-based protocol that guesses which of two adjacent datasets was the original input. Traditional approaches require thousands of such simulations, leading to significant computational overhead. Recent methods propose single-run auditing of the target algorithm to address this, substantially reducing computational cost. However, these methods' general applicability and tightness in producing empirical privacy guarantees remain uncertain. This work studies such problems in detail. Our contributions are twofold: First, we introduce a unifying framework for privacy audits based on information-theoretic principles, modeling the audit as a bit transmission problem in a noisy channel. This formulation allows us to derive fundamental limits and develop an audit approach that yields tight privacy lower bounds for various DP protocols. Second, leveraging this framework, we demystify the method of privacy audit by one run, identifying the conditions under which single-run audits are feasible or infeasible. Our analysis provides general guidelines for conducting privacy audits and offers deeper insights into the privacy audit. Finally, through experiments, we demonstrate that our approach produces tighter privacy lower bounds on common differentially private mechanisms while requiring significantly fewer observations. We also provide a case study illustrating that our method successfully detects privacy violations in flawed implementations of private algorithms.