Federated learning, ethics, and the double black box problem in medical AI

📅 2025-04-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses an emerging ethical risk in healthcare federated learning (FL): "federation opacity," which gives rise to a double black box, the inscrutability of both the model's logic and the distributed, siloed medical data on which it is trained. This opacity exacerbates challenges in clinical interpretability, accountability attribution, and informed consent. Adopting an interdisciplinary approach that integrates AI technical principles with medical ethics, the study develops an ethical analysis framework and conceptual model to critically examine the often-overstated privacy-preservation and collaboration benefits of FL in clinical settings. It provides the first rigorous articulation of the double black box structure unique to healthcare FL, bridging a critical gap in medical AI ethics research. The work identifies core barriers to ethical feasibility and proposes a governance pathway tailored to clinical deployment, advancing both theoretical foundations and practical guidance for responsible, ethically grounded healthcare AI.

📝 Abstract
Federated learning (FL) is a machine learning approach that allows multiple devices or institutions to collaboratively train a model without sharing their local data with a third-party. FL is considered a promising way to address patient privacy concerns in medical artificial intelligence. The ethical risks of medical FL systems themselves, however, have thus far been underexamined. This paper aims to address this gap. We argue that medical FL presents a new variety of opacity -- federation opacity -- that, in turn, generates a distinctive double black box problem in healthcare AI. We highlight several instances in which the anticipated benefits of medical FL may be exaggerated, and conclude by highlighting key challenges that must be overcome to make FL ethically feasible in medicine.
Problem

Research questions and friction points this paper is trying to address.

Examining the underexamined ethical risks of federated learning in medical AI
Characterizing federation opacity and the resulting double black box problem
Assessing whether the anticipated benefits of medical FL are exaggerated
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies federation opacity as a new variety of opacity in medical FL
Articulates the distinctive double black box problem in healthcare AI
Critically examines exaggerated privacy and collaboration claims for medical FL
Joshua Hatherley
Postdoctoral Fellow, Center for the Philosophy of AI, University of Copenhagen
AI ethics, AI safety, bioethics, data ethics, philosophy of technology
Anders Sogaard
Center for the Philosophy of AI, University of Copenhagen, Denmark; Department of Communication, University of Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, Denmark
Angela Ballantyne
Department of Primary Health Care and General Practice, University of Otago, New Zealand
Ruben Pauwels
Associate Professor, Department of Dentistry and Oral Health, Aarhus University
Medical Imaging, Medical Physics, Image Processing, Artificial Intelligence, Deep Learning