Metric Privacy in Federated Learning for Medical Imaging: Improving Convergence and Preventing Client Inference Attacks

📅 2025-02-03
🤖 AI Summary
In federated learning for medical imaging, there is a fundamental tension between privacy preservation—typically enforced via differential privacy (DP)—and model utility, since the added noise degrades convergence of the aggregated model. Method: This work introduces metric privacy, a relaxation of DP for domains equipped with a notion of distance, at the server-side aggregation step, where it has not previously been applied. It also designs a client inference attack tailored to federated learning, in which a semi-honest client tries to determine whether another client participated in training, and studies how the attack is mitigated under DP and metric privacy. Privacy-utility trade-offs are evaluated systematically across six aggregation strategies. Results: On non-IID medical imaging tasks, the approach improves average model accuracy by 3.2% over standard DP while maintaining comparable robustness against client participation inference attacks. The core contribution is the first application of metric privacy to federated aggregation, validated through both theoretical analysis and empirical evaluation.
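As a rough illustration of the server-side idea (not the paper's exact mechanism), the sketch below averages client updates and adds per-coordinate Laplace noise of scale 1/ε, which gives ε·d(x, x')-indistinguishability with respect to the L1 distance between candidate aggregates—the metric-privacy (d-privacy) guarantee—instead of calibrating noise to a worst-case global sensitivity as in classical server-side DP. The function name, the flattened-parameter representation, and the choice of the L1 metric are assumptions made for illustration only.

```python
import numpy as np

def metric_private_aggregate(client_updates, epsilon, rng=None):
    """Illustrative sketch, not the paper's exact mechanism.

    FedAvg-style mean of client updates, followed by per-coordinate
    Laplace noise of scale 1/epsilon. With respect to the L1 distance d
    between two possible aggregates x and x', this satisfies
    epsilon * d(x, x')-indistinguishability (metric privacy / d-privacy).
    """
    rng = rng or np.random.default_rng()
    updates = np.stack(client_updates)   # shape: (n_clients, n_params)
    aggregate = updates.mean(axis=0)     # plain FedAvg mean

    # Metric-privacy-style noise: scale 1/epsilon per coordinate, so nearby
    # models remain nearly indistinguishable without calibrating to a
    # worst-case global sensitivity as standard server-side DP would.
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=aggregate.shape)
    return aggregate + noise
```

The paper itself compares six aggregation strategies and its own choice of distance; this sketch only shows why dropping the global-sensitivity factor can soften the utility loss relative to standard DP.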

📝 Abstract
Federated learning is a distributed learning technique that allows training a global model with the participation of different data owners without the need to share raw data. This architecture is orchestrated by a central server that aggregates the local models from the clients. While this server may be trusted, not all nodes in the network are, so differential privacy (DP) can be used to privatize the global model by adding noise. However, this may affect convergence across the rounds of the federated architecture, depending also on the aggregation strategy employed. In this work, we introduce the notion of metric privacy to mitigate the impact of classical server-side global DP on the convergence of the aggregated model. Metric privacy is a relaxation of DP, suitable for domains equipped with a notion of distance. We apply it from the server side by computing a distance for the difference between the local models. We compare our approach with standard DP by analyzing the impact on six classical aggregation strategies. The proposed methodology is applied to a medical imaging example, and different scenarios are simulated across homogeneous and non-i.i.d. clients. Finally, we introduce a novel client inference attack, in which a semi-honest client tries to determine whether another client participated in the training, and we study how it can be mitigated using DP and metric privacy. Our evaluation shows that metric privacy can increase the performance of the model compared to standard DP, while offering similar protection against client inference attacks.
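For intuition about the client inference attack described in the abstract, the hedged sketch below shows a minimal distinguisher. It assumes, purely for illustration, a worst-case semi-honest attacker who knows the other clients' updates and guesses whichever hypothesis (target included or excluded) produces an aggregate closer to the released global model; the function and variable names are hypothetical and the paper's attack may differ in its exact assumptions.

```python
import numpy as np

def client_inference_guess(released_global, known_updates, target_update):
    """Hypothetical participation-inference distinguisher (illustration only).

    Forms two candidate aggregates, with and without the target client, and
    guesses the hypothesis whose aggregate lies closer to the released
    (possibly noisy) global model.
    """
    with_target = np.mean(np.stack(known_updates + [target_update]), axis=0)
    without_target = np.mean(np.stack(known_updates), axis=0)

    d_in = np.linalg.norm(released_global - with_target)
    d_out = np.linalg.norm(released_global - without_target)
    return "participated" if d_in < d_out else "did not participate"
```

The noise added by DP or metric privacy blurs exactly this distance comparison, which is how both mechanisms can offer protection against the attack.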
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Medical Imaging
Privacy Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metric Privacy
Federated Learning
Differential Privacy Enhancement
Judith Sáinz-Pardo Díaz
Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. los Castros s/n. 39005 - Santander (Spain)
Andreas Athanasiou
INRIA
Differential Privacy, Location Privacy, Quantitative Information Flow, Federated Learning
Kangsoo Jung
Postdoctoral Researcher, INRIA
Differential Privacy, Game Theory, Machine Learning
Catuscia Palamidessi
Inria
Differential Privacy, Machine Learning, Fairness, Quantitative Information Flow, Concurrency Theory
Álvaro López García
Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. los Castros s/n. 39005 - Santander (Spain)