Understanding Disclosure Risk in Differential Privacy with Applications to Noise Calibration and Auditing (Extended Version)

📅 2026-03-12
🤖 AI Summary
This work addresses the lack of a unified, accurate disclosure-risk metric in existing differential privacy (DP) systems, which hinders effective noise calibration and auditing under realistic attacks. We propose "reconstruction advantage" as a unified measure of disclosure risk applicable to membership inference, attribute inference, and data reconstruction attacks. Our analysis reveals fundamental limitations of current approaches, such as reconstruction robustness (ReRo), under realistic assumptions, and establishes a theoretical framework for optimal attack strategies. By integrating differential privacy theory, adversarial risk modeling, and information-theoretic bounds, our method enables mechanism-agnostic derivation of tight privacy guarantees. This significantly enhances the accuracy and scope of DP auditing, yields improved utility–privacy trade-offs, and provides a principled foundation for configuring noise in DP systems.

📝 Abstract
Differential Privacy (DP) is widely adopted in data management systems to enable data sharing with formal disclosure guarantees. A central systems challenge is understanding how DP noise translates into effective protection against inference attacks, since this directly determines achievable utility. Most existing analyses focus only on membership inference -- capturing only one of several threats -- or rely on reconstruction robustness (ReRo). However, under realistic assumptions, we show that ReRo can yield misleading risk estimates and violate claimed bounds, limiting its usefulness for principled DP calibration and auditing. This paper introduces reconstruction advantage, a unified risk metric that consistently captures risk across membership inference, attribute inference, and data reconstruction. We derive tight bounds that relate DP noise to adversarial advantage and characterize optimal adversarial strategies for arbitrary DP mechanisms and attacker knowledge. These results enable risk-driven noise calibration and provide a foundation for systematic DP auditing. We show that reconstruction advantage improves the accuracy and scope of DP auditing and enables more effective utility-privacy trade-offs in DP-enabled data management systems.
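The paper's exact definition of reconstruction advantage is not reproduced here, but the general idea of relating DP noise to adversarial advantage can be sketched with the simplest case: a membership-inference attack against a Laplace mechanism, where the attacker's advantage (TPR − FPR) of any distinguishing test is bounded by (e^ε − 1)/(e^ε + 1) under ε-DP. The query values, threshold attack, and Monte Carlo setup below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0          # privacy budget
sensitivity = 1.0  # counting query: one record changes the count by at most 1
scale = sensitivity / eps  # Laplace noise scale for eps-DP

# Two neighboring worlds: target absent (count = 100) vs. present (count = 101).
n_trials = 200_000
out_absent = 100 + rng.laplace(0.0, scale, n_trials)
out_present = 101 + rng.laplace(0.0, scale, n_trials)

# Laplace likelihood ratios are monotone in the output, so threshold tests are
# optimal; guess "present" if the noisy count exceeds the midpoint 100.5.
tpr = np.mean(out_present > 100.5)  # true positive rate
fpr = np.mean(out_absent > 100.5)   # false positive rate
advantage = tpr - fpr

# eps-DP caps the advantage of any membership test at (e^eps - 1)/(e^eps + 1).
bound = (np.exp(eps) - 1) / (np.exp(eps) + 1)
print(f"empirical advantage ~ {advantage:.3f}, DP bound = {bound:.3f}")
```

Running this in the style of a DP audit, the empirical advantage (about 0.39 for ε = 1) stays below the theoretical cap (about 0.46); the paper's contribution, by contrast, is a metric that extends this kind of bound beyond membership inference to attribute inference and full data reconstruction.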
Problem

Research questions and friction points this paper is trying to address.

Disclosure Risk
Differential Privacy
Reconstruction Robustness
Inference Attacks
Privacy Auditing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstruction Advantage
Differential Privacy
Disclosure Risk
Noise Calibration
Privacy Auditing