🤖 AI Summary
Real-world differential privacy (DP) deployments often disclose their privacy parameters insufficiently and inconsistently, hindering rigorous assessment of the guarantees they provide and eroding public trust. To address this, we conducted in-depth interviews with 12 domain-expert practitioners across industry, academia, and policy, systematically identifying a minimal, essential set of DP parameters that should be disclosed in practice. Based on this expert consensus, we propose a human-centered, policy-informed DP privacy labeling framework. It standardizes the content, granularity, and presentation of disclosures, yielding an extensible, interpretable prototype label. Our framework bridges the gap between theoretical DP guarantees and real-world privacy claims by providing governments and organizations with a unified tool for transparent DP communication, improving the comparability, interpretability, and credibility of DP statements and advancing standardization in practical privacy engineering.
📝 Abstract
The increasing adoption of differential privacy (DP) has led to public-facing DP deployments by both government agencies and companies. However, real-world DP deployments often do not fully disclose their privacy guarantees, which vary greatly between deployments. Failing to disclose certain DP parameters can lead to misunderstandings about the strength of the privacy guarantee, undermining trust in DP. In this work, we seek to inform future standards for communicating the privacy guarantees of DP deployments. Based on semi-structured interviews with 12 DP experts, we identify the DP parameters necessary to comprehensively communicate DP guarantees, and describe why and how they should be disclosed. Building on these expert recommendations, we design an initial privacy label for DP that communicates privacy guarantees comprehensively in a standardized format.
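As a hypothetical illustration (not taken from the paper) of why disclosing a parameter such as ε matters, the sketch below implements the standard Laplace mechanism, where the noise scale for a query of L1 sensitivity Δ is Δ/ε. Two deployments can both truthfully claim "we use DP", yet an undisclosed tenfold difference in ε means a tenfold difference in the noise protecting each individual:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value under epsilon-DP: add Laplace(sensitivity/epsilon) noise."""
    return true_value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
# The same count query (sensitivity 1) released under two epsilons:
# at epsilon = 0.1 the noise scale is 10; at epsilon = 1.0 it is 1.
# Without the disclosed epsilon, these two guarantees are indistinguishable
# to the public, even though one is far stronger than the other.
for eps in (0.1, 1.0):
    print(f"epsilon={eps}: noisy count = {laplace_mechanism(100.0, 1.0, eps, rng):.2f}")
```

The names `laplace_noise` and `laplace_mechanism` are illustrative; the point is only that ε directly calibrates the noise, so omitting it from a disclosure hides the actual strength of the guarantee.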