FedDAPL: Toward Client-Private Generalization in Federated Learning

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) in medical imaging suffers from domain shift induced by scanner heterogeneity, yet existing domain adaptation and generalization methods either require cross-site data access or violate FL's privacy constraints. This paper proposes the first privacy-preserving federated domain adaptation framework: it integrates Domain-Adversarial Neural Networks (DANN) into the FL pipeline and introduces a proximal regularization strategy to stabilize distributed adversarial training, enabling client-local learning of domain-invariant representations. Critically, the method shares nothing beyond model gradients, with no exchange of raw data, fully complying with FL privacy requirements. Evaluated on the OpenBHB brain MRI dataset (15 training sites → 19 unseen test sites), it outperforms FedAvg and Empirical Risk Minimization (ERM) baselines, substantially improving cross-site generalization. To our knowledge, this is the first approach achieving effective domain adaptation under strict FL constraints while preserving end-to-end privacy.
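The core DANN mechanism the summary refers to is a gradient reversal layer: identity on the forward pass, a sign-flipped (and λ-scaled) gradient on the backward pass, which drives the feature extractor to confuse the domain classifier. The paper's implementation details are not given here, so the following is a minimal, framework-free NumPy illustration with hypothetical function names:

```python
import numpy as np

def grl_forward(x):
    """Gradient reversal layer, forward pass: plain identity."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip the gradient's sign and scale by lambda,
    so the upstream feature extractor is trained to *increase* the
    domain classifier's loss (domain confusion)."""
    return -lam * grad_output

# Toy check: a domain-classifier gradient of [0.5, -1.0] reaches the
# feature extractor as [-0.25, 0.5] when lambda = 0.5.
g = grl_backward(np.array([0.5, -1.0]), lam=0.5)
print(g)  # -> [-0.25  0.5 ]
```

In an autodiff framework this would be a custom op (e.g. a `torch.autograd.Function`); the sketch just exposes the two passes explicitly.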

📝 Abstract
Federated Learning (FL) trains models locally at each research center or clinic and aggregates only model updates, making it a natural fit for medical imaging, where strict privacy laws forbid raw data sharing. A major obstacle is scanner-induced domain shift: non-biological variations in hardware or acquisition protocols can cause models to fail on external sites. Most harmonization methods correct this shift by directly comparing data across sites, conflicting with FL's privacy constraints. Domain Generalization (DG) offers a privacy-friendly alternative - learning site-invariant representations without sharing raw data - but standard DG pipelines still assume centralized access to multi-site data, again violating FL's guarantees. This paper addresses these difficulties by integrating a Domain-Adversarial Neural Network (DANN) directly into the FL process. After demonstrating that a naive federated DANN fails to converge, we propose a proximal regularization method that stabilizes adversarial training across clients. Experiments on T1-weighted 3-D brain MRIs from the OpenBHB dataset, performing brain-age prediction on participants aged 6-64 y (mean 22±6 y; 45% male) in training and 6-79 y (mean 19±13 y; 55% male) in validation, show that training on 15 sites and testing on 19 unseen sites yields superior cross-site generalization over FedAvg and ERM while preserving data privacy.
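One way to picture the stabilized client objective described in the abstract is a DANN-style loss plus a FedProx-like proximal penalty that anchors each client's weights to the last global model. The paper's exact formulation and coefficients are not reproduced here, so this NumPy sketch uses hypothetical names (`lam`, `mu`) and a flat weight vector purely for illustration:

```python
import numpy as np

def client_loss(w, w_global, task_loss, domain_loss, lam=0.1, mu=0.01):
    """Hypothetical client objective (feature-extractor view):
    minimize the task loss, maximize domain confusion (the DANN
    minus sign), and stay close to the current global model via a
    proximal term that damps adversarial drift across rounds."""
    proximal = 0.5 * mu * np.sum((w - w_global) ** 2)
    return task_loss - lam * domain_loss + proximal

# Toy check: with w == w_global the proximal term vanishes,
# leaving 0.5 - 0.1 * 0.2 = 0.48.
w = np.array([1.0, 2.0])
print(client_loss(w, w, task_loss=0.5, domain_loss=0.2))
```

The design intuition is that the adversarial term alone lets clients' feature extractors diverge between aggregation rounds; the proximal term keeps local updates within a trust region around the shared model.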
Problem

Research questions and friction points this paper is trying to address.

Addressing scanner-induced domain shift in federated learning for medical imaging
Ensuring client data privacy while achieving cross-site generalization
Stabilizing adversarial training in federated learning without raw data sharing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Domain-Adversarial Neural Network into FL
Uses proximal regularization to stabilize adversarial training
Enhances cross-site generalization while preserving data privacy
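For context on the FedAvg baseline the method is compared against: the server simply averages client weights, weighted by local sample counts. A minimal sketch, with flat parameter vectors standing in for full model weights:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sample-size-weighted average of client model parameters,
    the standard FedAvg aggregation step."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                 # per-client mixing weights
    stacked = np.stack(client_weights)           # (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Toy round: the second client holds 3x the data, so it dominates.
agg = fedavg([np.array([1.0, 3.0]), np.array([3.0, 5.0])], [1, 3])
print(agg)  # -> [2.5 4.5]
```

The proposed method keeps this aggregation step but changes what each client optimizes locally, which is why only gradients/updates ever leave a site.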