🤖 AI Summary
This paper systematizes differential privacy (DP) auditing research, identifying three recurring shortcomings in existing approaches: inefficiency, lack of end-to-end applicability, and loose (non-tight) guarantees. The authors propose a unified review framework organized around three desiderata (efficiency, end-to-end-ness, and tightness) and systematize how state-of-the-art auditing techniques operate in terms of threat models, attack paradigms, and evaluation functions. Through a systematic literature review, formal modeling, and multidimensional comparison, the study surfaces details overlooked by prior work and the factors limiting each desideratum. Key contributions include: (1) cross-contextual desiderata that DP audits should target; (2) a reusable, principled methodology for assessing progress in the field; and (3) a map of friction points and open research problems, with actionable guidance for future work. Collectively, this work provides theoretical and practical grounding for advancing DP auditing toward rigorous, scalable, and deployable assurance.
📝 Abstract
This paper systematizes research on auditing Differential Privacy (DP) techniques, aiming to identify key insights into the current state of the art and open challenges. First, we introduce a comprehensive framework for reviewing work in the field and establish three cross-contextual desiderata that DP audits should target: namely, efficiency, end-to-end-ness, and tightness. Then, we systematize the modes of operation of state-of-the-art DP auditing techniques, including threat models, attacks, and evaluation functions. This allows us to highlight key details overlooked by prior work, analyze the limiting factors to achieving the three desiderata, and identify open research problems. Overall, our work provides a reusable and systematic methodology geared to assess progress in the field and identify friction points and future directions for our community to focus on.
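To make the auditing setting concrete: most DP audits in this literature run a distinguishing attack (e.g., membership inference) against a mechanism and translate the attack's observed error rates into an empirical lower bound on the privacy parameter ε. The sketch below is illustrative only and is not from the paper; it uses the standard hypothesis-testing view of (ε, δ)-DP and a point estimate for simplicity, whereas real audits replace the raw rates with confidence bounds (e.g., Clopper-Pearson) to obtain statistically valid claims. The function name and example numbers are our own.

```python
import math

def empirical_epsilon(fpr: float, fnr: float, delta: float = 0.0) -> float:
    """Point-estimate epsilon lower bound implied by a distinguishing
    attack's false-positive rate (fpr) and false-negative rate (fnr).

    In the hypothesis-testing view, any (eps, delta)-DP mechanism forces
        fpr + e^eps * fnr >= 1 - delta   and   fnr + e^eps * fpr >= 1 - delta,
    so observed error rates certify eps >= the larger of the two bounds.
    (Illustrative sketch; real audits use confidence intervals on fpr/fnr.)
    """
    candidates = []
    if fpr > 0 and (1.0 - delta - fnr) > 0:
        candidates.append(math.log((1.0 - delta - fnr) / fpr))
    if fnr > 0 and (1.0 - delta - fpr) > 0:
        candidates.append(math.log((1.0 - delta - fpr) / fnr))
    return max(candidates) if candidates else float("inf")

# An attack achieving 5% FPR and 20% FNR certifies eps >= log(0.80 / 0.05)
print(round(empirical_epsilon(0.05, 0.20), 2))  # -> 2.77
```

A "tight" audit, in the paper's terminology, is one where such empirical lower bounds approach the ε claimed by the mechanism's analysis; a large gap indicates either a weak attack or a loose theoretical bound.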