$f$-Differential Privacy Filters: Validity and Approximate Solutions

📅 2026-02-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of bounding cumulative privacy loss under fully adaptive composition in the framework of $f$-differential privacy ($f$-DP). It provides the first characterization of when the natural $f$-DP privacy filter fails and establishes necessary and sufficient conditions for its validity. Building on this insight, the paper proposes an approximate Gaussian differential privacy filter tailored for both low and high subsampling rates, integrating $f$-DP theory, fully adaptive composition analysis, the central limit theorem, and subsampled Gaussian mechanisms. Empirical evaluations demonstrate that, when the subsampling rate $q < 0.2$ or $q > 0.8$, the proposed filter yields tighter privacy guarantees than existing methods based on Rényi differential privacy.
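As background for the trade-off-curve terminology used in the summary and abstract, the standard $f$-DP definitions (textbook material, not specific to this paper) are

$$
T(P, Q)(\alpha) \;=\; \inf\{\beta_\phi : \alpha_\phi \le \alpha\},
\qquad
G_\mu(\alpha) \;=\; \Phi\!\left(\Phi^{-1}(1-\alpha) - \mu\right),
$$

where $\phi$ ranges over tests of $P$ versus $Q$ with type I error $\alpha_\phi$ and type II error $\beta_\phi$, and $\Phi$ is the standard normal CDF. A mechanism $M$ is $f$-DP if $T(M(D), M(D'))(\alpha) \ge f(\alpha)$ for all neighbouring datasets $D, D'$ and all $\alpha \in [0,1]$; $\mu$-Gaussian DP ($\mu$-GDP) is $f$-DP with $f = G_\mu$.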

πŸ“ Abstract
Accounting for privacy loss under fully adaptive composition -- where both the choice of mechanisms and their privacy parameters may depend on the entire history of prior outputs -- is a central challenge in differential privacy (DP). In this setting, privacy filters are stopping rules for compositions that ensure a prescribed global privacy budget is not exceeded. It remains unclear whether optimal trade-off-function-based notions, such as $f$-DP, admit valid privacy filters under fully adaptive interaction. We show that the natural approach to defining an $f$-DP filter -- composing individual trade-off curves and stopping when the prescribed $f$-DP curve is crossed -- is fundamentally invalid. We characterise when and why this failure occurs, and establish necessary and sufficient conditions under which the natural filter is valid. Furthermore, we prove a fully adaptive central limit theorem for $f$-DP and construct an approximate Gaussian DP filter for subsampled Gaussian mechanisms at small sampling rates $q<0.2$ and large sampling rates $q>0.8$, yielding tighter privacy guarantees than filters based on Rényi DP in the same setting.
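To make the "compose and stop" idea from the abstract concrete, below is a minimal, hypothetical Python sketch of the natural Gaussian-DP filter, using the known composition rule that $\mu_i$-GDP steps compose to $\sqrt{\sum_i \mu_i^2}$-GDP. The class name and API are illustrative assumptions, not the paper's construction; indeed, the paper's central result is that this kind of naive trade-off-curve filter is not valid in general under fully adaptive composition.

```python
import math


class NaiveGDPFilter:
    """Illustrative (naive) Gaussian-DP privacy filter: track a running GDP
    parameter via sqrt(sum_i mu_i^2) and stop before a global budget is
    exceeded. This is only a sketch of the accounting; the paper shows such
    a filter can fail under fully adaptive composition."""

    def __init__(self, mu_budget: float):
        self.mu_budget = mu_budget
        self.sum_mu_sq = 0.0  # running sum of squared per-step GDP parameters

    def try_spend(self, mu_next: float) -> bool:
        """Record the step and return True if the composed GDP parameter
        stays within the budget; otherwise return False (stop composing)."""
        if math.sqrt(self.sum_mu_sq + mu_next ** 2) > self.mu_budget:
            return False
        self.sum_mu_sq += mu_next ** 2
        return True


# Example: adaptively chosen noise scales for sensitivity-1 Gaussian
# mechanisms, where each step with noise scale sigma is (1 / sigma)-GDP.
filt = NaiveGDPFilter(mu_budget=1.0)
for sigma in [2.0, 1.5, 3.0, 1.0]:
    if not filt.try_spend(1.0 / sigma):
        break  # budget would be exceeded; stop the interaction
```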
Problem

Research questions and friction points this paper is trying to address.

differential privacy
f-DP
privacy filters
adaptive composition
privacy loss accounting
Innovation

Methods, ideas, or system contributions that make the work stand out.

f-Differential Privacy
Privacy Filters
Fully Adaptive Composition
Central Limit Theorem
Subsampled Gaussian Mechanism
🔎 Similar Papers
No similar papers found.
Long Tran
Department of Computer Science, University of Helsinki, Finland
Antti Koskela
Nokia Bell Labs
Machine Learning · Differential Privacy · Numerical Analysis
Ossi Räisä
Department of Computer Science, University of Helsinki, Finland
Antti Honkela
Professor, University of Helsinki
Machine Learning · Differential Privacy · Bayesian Inference · Bioinformatics · #UnivHelsinkiCS