Differential Privacy for Deep Learning in Medicine

📅 2025-05-31
🤖 AI Summary
Differential privacy (DP) in medical deep learning faces unresolved tensions among privacy budget allocation, model utility, and subgroup fairness—particularly for non-imaging modalities (e.g., text, time-series) and vulnerable demographic subgroups. Method: We conduct a systematic review of 74 studies on DP applications in centralized and federated medical learning, analyzing trade-offs across DP-SGD, randomized response, and generative DP mechanisms. Contribution/Results: We reveal, for the first time, that strong DP constraints disproportionately degrade performance on non-imaging tasks and underrepresented subgroups, while imaging tasks remain relatively robust. Only 12% of reviewed studies perform explicit fairness evaluation. We identify the lack of fairness auditing as a critical blind spot in current DP-enabled medical AI. To address this, we propose a standardized subgroup fairness audit framework for multimodal healthcare data, grounded in metrics such as equalized odds. This framework provides empirical grounding and methodological guidance for jointly optimizing privacy, utility, and fairness.
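
To make the audit framework's core metric concrete, the sketch below computes an equalized-odds gap across demographic subgroups, the kind of check the proposed framework would standardize. This is a minimal illustration assuming binary labels, binary predictions, and a categorical sensitive attribute; the function names and toy data are ours, not part of the paper's framework.

```python
# Minimal equalized-odds subgroup audit (illustrative sketch, not the
# paper's framework): a gap of 0 means identical TPR and FPR across groups.
import numpy as np

def group_rates(y_true, y_pred, mask):
    """True-positive and false-positive rates within one subgroup."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean() if (yt == 1).any() else float("nan")
    fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")
    return tpr, fpr

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest TPR/FPR gap across subgroups (0 = perfectly equalized odds)."""
    rates = [group_rates(y_true, y_pred, groups == g) for g in np.unique(groups)]
    tprs, fprs = zip(*rates)
    return max(np.nanmax(tprs) - np.nanmin(tprs),
               np.nanmax(fprs) - np.nanmin(fprs))

# Toy example: audit a hypothetical DP-trained model's predictions.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_dp   = np.array([1, 0, 0, 1, 0, 1, 0, 0])   # hypothetical DP model outputs
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
print(f"equalized-odds gap: {equalized_odds_gap(y_true, y_dp, groups):.2f}")
```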

📝 Abstract
Differential privacy (DP) is a key technique for protecting sensitive patient data in medical deep learning (DL). As clinical models grow more data-dependent, balancing privacy with utility and fairness has become a critical challenge. This scoping review synthesizes recent developments in applying DP to medical DL, with a particular focus on DP-SGD and alternative mechanisms across centralized and federated settings. Using a structured search strategy, we identified 74 studies published up to March 2025. Our analysis spans diverse data modalities, training setups, and downstream tasks, and highlights the trade-offs between privacy guarantees, model accuracy, and subgroup fairness. We find that while DP, even at strong privacy budgets, can preserve performance in well-structured imaging tasks, severe degradation often occurs under strict privacy, particularly in underrepresented or complex modalities. Furthermore, privacy-induced performance gaps disproportionately affect demographic subgroups, with fairness impacts varying by data type and task. A small subset of studies explicitly addresses these trade-offs through subgroup analysis or fairness metrics, but most omit them entirely. Beyond DP-SGD, emerging approaches leverage alternative mechanisms, generative models, and hybrid federated designs, though reporting remains inconsistent. We conclude by outlining key gaps in fairness auditing, standardization, and evaluation protocols, offering guidance for future work toward equitable and clinically robust privacy-preserving DL systems in medicine.
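
For readers unfamiliar with the mechanism at the center of the review, the sketch below shows the core DP-SGD step: per-example gradient clipping followed by Gaussian noise on the aggregated gradient. It is a minimal PyTorch illustration, not code from any reviewed study; clip_norm and noise_multiplier are assumed hyperparameters, and production work would typically use a library such as Opacus, which also tracks the (epsilon, delta) privacy budget.

```python
# Minimal DP-SGD step (illustrative sketch): clip each example's gradient to
# bound sensitivity, then add Gaussian noise scaled to the clip norm.
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                          # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = min(1.0, clip_norm / (norm + 1e-6))   # clip to bound sensitivity
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            noise = torch.randn_like(g) * noise_multiplier * clip_norm
            p -= lr * (g + noise) / len(xb)           # noisy average gradient
```

The per-example loop is what makes DP-SGD expensive relative to ordinary SGD; libraries vectorize it, but the privacy logic is exactly this clip-then-noise pattern.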
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy and utility in medical deep learning
Addressing fairness impacts of differential privacy on subgroups
Evaluating DP-SGD and alternative mechanisms, such as randomized response, in clinical settings (see the sketch after this list)
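
To make the "alternative mechanisms" concrete, here is a hedged sketch of randomized response, which gives local DP for a single binary attribute: each record is reported truthfully with probability e^eps / (e^eps + 1), and the aggregate estimate is debiased afterwards. All names and constants below are illustrative assumptions, not taken from the reviewed studies.

```python
# Randomized response for one binary attribute (illustrative sketch):
# local DP without trusting a central curator.
import math, random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true proportion from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (sum(reports) / len(reports) + p - 1.0) / (2.0 * p - 1.0)

random.seed(0)
true_bits = [1] * 300 + [0] * 700               # 30% true prevalence
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(f"estimated prevalence: {debiased_mean(reports, 1.0):.2f}")
```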
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveys DP-SGD as the dominant mechanism for privacy in medical deep learning
Explores hybrid federated designs in DL (a toy sketch follows this list)
Analyzes fairness impacts across demographic subgroups
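
The sketch below illustrates one simple hybrid federated design of the kind the review surveys: each site clips its model update, and the server adds Gaussian noise calibrated to the clip norm before applying the average. This is a toy NumPy illustration under assumed constants, not any reviewed system, and it omits secure aggregation and privacy-budget accounting.

```python
# One round of DP federated averaging (toy sketch): clip per-site updates,
# average them, and add server-side Gaussian noise scaled to the clip norm.
import numpy as np

def dp_fedavg_round(global_w, site_updates, clip=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = []
    for delta in site_updates:                       # per-site update clipping
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip / len(site_updates), size=avg.shape)
    return global_w + avg + noise                    # noisy aggregate update

w = np.zeros(4)
updates = [np.array([0.5, -0.2, 0.1, 0.0]), np.array([2.0, 0.3, -0.4, 0.1])]
print(dp_fedavg_round(w, updates))
```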