Comparing privacy notions for protection against reconstruction attacks in machine learning

📅 2025-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Reconstruction attacks in federated learning (FL) threaten the privacy of training data, yet comparing the practical defense capabilities of differential privacy (DP) and its variants—such as metric privacy and Rényi DP—is hindered by semantic inconsistencies in their privacy parameters (e.g., ε, δ). Method: We propose a unified evaluation framework that, for the first time, integrates Rényi DP analysis with Bayes capacity theory to establish two orthogonal, comparable criteria: (i) quantification of privacy strength via reconstruction risk, and (ii) cross-paradigm alignment of parameter semantics. Contribution/Results: We systematically evaluate multiple DP variants against reconstruction attacks in realistic FL settings, revealing fundamental differences in their empirical robustness. Our framework delivers an interpretable, reproducible, and paradigm-agnostic quantitative methodology for benchmarking privacy mechanisms—providing both theoretical grounding and practical guidelines for selecting appropriate privacy-preserving techniques in FL.
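The summary's second criterion rests on Bayes capacity as a measure of reconstruction risk. A minimal sketch of how the multiplicative Bayes capacity of a discrete channel is computed (sum over observations of the column maximum); the toy channel matrices below are illustrative, not from the paper.

```python
import numpy as np

def bayes_capacity(C):
    """Multiplicative Bayes capacity of a channel matrix C.

    C[x, y] = P(observation y | secret x); each row sums to 1.
    The capacity is sum_y max_x C[x, y]: the worst-case
    multiplicative leakage over all priors and gain functions
    that an optimal (Bayesian) adversary can achieve.
    """
    C = np.asarray(C, dtype=float)
    return C.max(axis=0).sum()

# A noiseless channel on 2 secrets leaks everything: capacity = 2.
print(bayes_capacity(np.eye(2)))        # 2.0

# A completely uninformative channel leaks nothing: capacity = 1.
print(bayes_capacity(np.full((2, 2), 0.5)))  # 1.0
```

A capacity of 1 means observations carry no information about the secret, which is why the measure is a natural yardstick for reconstruction threats.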

📝 Abstract
Within the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL), which was designed with privacy preservation in mind. In response to these threats, the privacy community recommends the use of differential privacy (DP) in the stochastic gradient descent algorithm, termed DP-SGD. However, the proliferation of variants of DP in recent years—such as metric privacy—has made it challenging to conduct a fair comparison between different mechanisms due to the different meanings of the privacy parameters $\epsilon$ and $\delta$ across different variants. Thus, interpreting the practical implications of $\epsilon$ and $\delta$ in the FL context and amongst variants of DP remains ambiguous. In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees, namely $(\epsilon,\delta)$-DP and metric privacy. We provide two foundational means of comparison: firstly, via the well-established $(\epsilon,\delta)$-DP guarantees, made possible through the Rényi differential privacy framework; and secondly, via Bayes' capacity, which we identify as an appropriate measure for reconstruction threats.
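The abstract centers on DP-SGD. A minimal sketch of the clip-then-noise step at its core, assuming per-example gradients are already available; the clip norm and noise multiplier are illustrative defaults, not values from the paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                rng=np.random.default_rng(0)):
    """One DP-SGD update direction: clip each example's gradient to
    L2 norm `clip_norm` (bounding per-example sensitivity), average,
    then add Gaussian noise scaled to the clipping bound."""
    n = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)  # guard against zero gradients
        clipped.append(g * min(1.0, clip_norm / norm))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean.shape)
    return mean + noise

grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]  # toy per-example gradients
print(dp_sgd_step(grads))  # noisy, clipped average
```

Clipping is what makes the Gaussian noise calibration meaningful: without a bound on each example's contribution, the sensitivity of the averaged gradient would be unbounded.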
Problem

Research questions and friction points this paper is trying to address.

Compare privacy notions for reconstruction attack protection
Clarify practical implications of $ε$ and $δ$ in FL
Establish framework for comparing $(ε,δ)$-DP and metric privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses DP-SGD for privacy
Compares (ε,δ)-DP with metric privacy
Applies Rényi framework for analysis
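The last point above, applying the Rényi framework, hinges on converting a Rényi DP curve into an $(\epsilon,\delta)$-DP guarantee. A hedged sketch of the standard conversion, using the Gaussian mechanism's RDP curve $\epsilon(\alpha) = \alpha / (2\sigma^2)$ for sensitivity 1; the values of $\sigma$ and $\delta$ below are illustrative, not from the paper.

```python
import math

def rdp_gaussian(alpha, sigma):
    """Rényi DP of the Gaussian mechanism at order alpha (sensitivity 1)."""
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(sigma, delta, orders=range(2, 256)):
    """Convert an RDP curve to an (epsilon, delta)-DP guarantee via
    epsilon = min_alpha [ eps_RDP(alpha) + log(1/delta) / (alpha - 1) ],
    searching over a grid of integer orders."""
    return min(rdp_gaussian(a, sigma) + math.log(1.0 / delta) / (a - 1)
               for a in orders)

print(rdp_to_dp(sigma=2.0, delta=1e-5))  # ≈ 2.53 for one Gaussian release
```

The same conversion, applied per mechanism, is what allows guarantees stated in different DP dialects to be compared on a common $(\epsilon,\delta)$ scale.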
Sayan Biswas
EPFL, Lausanne, Switzerland
M. Dras
Macquarie University, Sydney, Australia
Pedro Faustini
Macquarie University
NLP · Differential Privacy · Deep Learning
Natasha Fernandes
Macquarie University
Differential Privacy · Formal Methods · Quantitative Information Flow
Annabelle McIver
Macquarie University
C. Palamidessi
INRIA and École Polytechnique, Palaiseau, France
Parastoo Sadeghi
UNSW Canberra, Australia