🤖 AI Summary
This work formally analyzes the privacy guarantees of Google's Privacy Sandbox APIs, the Private Aggregation API (PAA) and the Attribution Reporting API (ARA), under differential privacy (DP) in the realistic and challenging setting of *interactive queries* and *dynamic database updates*. We first construct an abstract semantic model of the APIs, explicitly formalizing query strategies and database evolution as response-dependent processes. Under reasonable assumptions, we rigorously prove that the joint mechanism formed by PAA and ARA satisfies $(\varepsilon,\delta)$-differential privacy. Our analysis departs from classical DP frameworks, which typically assume static databases and non-interactive queries. This is the first formal verification establishing end-to-end DP guarantees for Privacy Sandbox's advertising measurement infrastructure. By bridging theory and practice, our work provides a verifiable, mathematically grounded privacy assurance for industrial-scale privacy-preserving APIs, filling a critical gap in the formal verification of real-world privacy mechanisms.
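As a rough intuition for the $(\varepsilon,\delta)$-DP guarantee discussed above (not the paper's actual mechanism), a minimal sketch of the standard Gaussian mechanism on a counting query is shown below; the noise scale `sigma` follows the classic analytic bound, which is valid for $\varepsilon < 1$:

```python
import math
import random

def gaussian_mechanism(true_count, sensitivity, epsilon, delta):
    """Return an (epsilon, delta)-DP noisy answer to a counting query.

    Uses the classic Gaussian-mechanism calibration
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    which satisfies (epsilon, delta)-DP for epsilon < 1.
    """
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return true_count + random.gauss(0.0, sigma)

# Example: one noisy aggregate with epsilon = 0.5, delta = 1e-5.
noisy = gaussian_mechanism(true_count=100, sensitivity=1.0,
                           epsilon=0.5, delta=1e-5)
```

This illustrates only a single static query; the paper's contribution is precisely to extend such per-query guarantees to interactive, response-dependent query strategies and evolving databases.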
📝 Abstract
Google's Privacy Sandbox initiative includes APIs that enable privacy-preserving advertising functionality as part of the effort to limit third-party cookies. In particular, the Private Aggregation API (PAA) and the Attribution Reporting API (ARA) can be used for ad measurement while providing guardrails for safeguarding user privacy, including a framework for satisfying differential privacy (DP). In this work, we provide an abstract model for analyzing the privacy of these APIs and show that they satisfy a formal DP guarantee under certain assumptions. Our analysis handles the case where both the queries and the database can change interactively based on previous responses from the API.
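To make the interactive setting concrete (this is an illustrative sketch, not the paper's model), the snippet below simulates an analyst whose next counting query depends on the previous noisy response; under basic sequential composition, the total privacy loss is the sum of the per-query budgets:

```python
import math
import random

def laplace(scale):
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def interactive_session(db, num_queries, eps_per_query):
    """Answer adaptively chosen threshold-counting queries.

    Each answer is epsilon_per_query-DP (Laplace mechanism, sensitivity 1);
    basic composition bounds the session at num_queries * eps_per_query.
    """
    responses = []
    threshold = 0.0
    for _ in range(num_queries):
        # The query itself (the threshold) depends on the prior noisy answer.
        true_count = sum(1 for x in db if x > threshold)
        noisy = true_count + laplace(1.0 / eps_per_query)
        responses.append(noisy)
        threshold = noisy / 2.0  # analyst adapts based on the response
    return responses

total_epsilon = 3 * 0.5  # e.g., 3 queries at eps = 0.5 each
```

The paper's analysis goes further than this toy picture: it also lets the database itself evolve in a response-dependent way, which basic composition alone does not capture.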