🤖 AI Summary
Reconstruction attacks in federated learning (FL) threaten the privacy of training data, yet comparing the practical defense capabilities of differential privacy (DP) and its variants—such as metric privacy and Rényi DP—is hindered by semantic inconsistencies in their privacy parameters (e.g., ε, δ).
Method: We propose a unified evaluation framework that, for the first time, integrates Rényi DP analysis with Bayes capacity theory to establish two orthogonal, comparable criteria: (i) quantification of privacy strength via reconstruction risk, and (ii) cross-paradigm alignment of parameter semantics.
Contribution/Results: We systematically evaluate multiple DP variants against reconstruction attacks in realistic FL settings, revealing fundamental differences in their empirical robustness. Our framework delivers an interpretable, reproducible, and paradigm-agnostic quantitative methodology for benchmarking privacy mechanisms—providing both theoretical grounding and practical guidelines for selecting appropriate privacy-preserving techniques in FL.
📝 Abstract
Within the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL), which was designed with privacy preservation in mind. In response to these threats, the privacy community recommends the use of differential privacy (DP) in the stochastic gradient descent algorithm, termed DP-SGD. However, the proliferation of variants of DP in recent years (such as metric privacy) has made it challenging to conduct a fair comparison between different mechanisms due to the different meanings of the privacy parameters $\epsilon$ and $\delta$ across variants. Thus, interpreting the practical implications of $\epsilon$ and $\delta$ in the FL context and amongst variants of DP remains ambiguous. In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees, namely $(\epsilon,\delta)$-DP and metric privacy. We provide two foundational means of comparison: firstly, via the well-established $(\epsilon,\delta)$-DP guarantees, made possible through the Rényi differential privacy framework; and secondly, via the Bayes capacity, which we identify as an appropriate measure for reconstruction threats.
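To make the first means of comparison concrete, the standard Rényi DP conversion (Mironov, 2017) maps an $(\alpha, \epsilon_\alpha)$-RDP guarantee to $(\epsilon, \delta)$-DP via $\epsilon = \epsilon_\alpha + \log(1/\delta)/(\alpha - 1)$. The sketch below is illustrative only (not code from the paper); the Gaussian-mechanism RDP bound and the grid of $\alpha$ values are standard textbook choices, not the authors' settings.

```python
import math

def rdp_to_dp(alpha: float, rdp_eps: float, delta: float) -> float:
    """Convert an (alpha, rdp_eps)-RDP guarantee to (eps, delta)-DP
    via the standard conversion: eps = rdp_eps + log(1/delta)/(alpha-1)."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """The Gaussian mechanism with noise multiplier sigma satisfies
    (alpha, alpha / (2 sigma^2))-RDP for every alpha > 1."""
    return alpha / (2.0 * sigma ** 2)

# Optimize over a grid of alpha orders to get the tightest eps
# for a target delta (here sigma = 1.0, delta = 1e-5, for illustration).
sigma, delta = 1.0, 1e-5
eps = min(rdp_to_dp(a, gaussian_rdp(a, sigma), delta)
          for a in [1.5 + 0.5 * i for i in range(100)])
```

Because the conversion holds for every Rényi order $\alpha$, taking the minimum over a grid of orders is the usual way RDP accountants report a single $(\epsilon, \delta)$ pair, which is what enables the cross-mechanism comparison described above.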
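For the second means of comparison, the (multiplicative) Bayes capacity of a channel matrix $C$ (rows: secrets, columns: observations) is $\log_2 \sum_y \max_x C[x,y]$; it upper-bounds the leakage to any Bayesian adversary, which is what makes it a natural measure for reconstruction threats. The following is a minimal sketch of that definition on toy channels, not an implementation from the paper:

```python
import math

def bayes_capacity(channel):
    """Multiplicative Bayes capacity of a channel matrix whose rows
    (one per secret) are probability distributions over observations:
        log2( sum over columns of the column-wise maximum ).
    """
    n_cols = len(channel[0])
    total = sum(max(row[j] for row in channel) for j in range(n_cols))
    return math.log2(total)

# A noiseless channel leaks everything: capacity = log2(#secrets) = 1 bit.
identity = [[1.0, 0.0], [0.0, 1.0]]
# A constant channel leaks nothing: capacity = 0 bits.
constant = [[0.5, 0.5], [0.5, 0.5]]
```

The two extreme channels bracket the scale: any DP or metric-privacy mechanism induces a channel whose capacity falls between 0 and $\log_2$ of the number of secrets, giving a single paradigm-agnostic axis on which mechanisms can be ranked.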