Implications of Current Litigation on the Design of AI Systems for Healthcare Delivery

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the absence of robust accountability mechanisms when medical AI causes harm, identifying multi-stakeholder causation patterns and obstacles to legal redress through a systematic analysis of 31 real-world litigation cases. Method: it integrates legal analysis into explainable AI (XAI) design, proposing a healthcare-specific AI accountability framework grounded in a multi-party responsibility structure. Contribution/Results: first, it outlines a patient-centered, tiered liability system encompassing AI developers, deployers, and clinical end-users; second, it proposes XAI systems tailored to legal representatives, improving the evidentiary quality of system outputs and the traceability of causal attribution so that patients can assert their rights effectively. Together, the work bridges the gap between technical explainability and judicial accountability, offering both theoretical foundations and actionable implementation pathways for responsible medical AI governance.

📝 Abstract
Many calls for explainable AI (XAI) systems in medicine are tied to a desire for AI accountability: accounting for, mitigating, and ultimately preventing harms from AI systems. Because XAI systems provide human-understandable explanations for their output, they are often viewed as a primary path to prevent harms to patients. However, when harm occurs, laws, policies, and regulations also shape AI accountability by impacting how harmed individuals can obtain recourse. Current approaches to XAI explore physicians' medical and relational needs to counter harms to patients, but there is a need to understand how XAI systems should account for the legal considerations of those impacted. We conduct an analysis of 31 legal cases and reported harms to identify patterns around how AI systems impact patient care. Our findings reflect how patients' medical care relies on a complex web of stakeholders (physicians, state health departments, health insurers, care facilities, among others), and how many AI systems deployed across their healthcare delivery negatively impact their care. In response, patients have had no option but to seek legal recourse for harms. We shift the frame from physician-centered to patient-centered accountability approaches by describing how lawyers and technologists need to recognize and address where AI harms happen. We present paths for preventing or countering harm (1) by changing liability structures to reflect the role of many stakeholders in shaping how AI systems impact patient care; and (2) by designing XAI systems that can help advocates, such as legal representatives, who provide critical legal expertise and practically support recourse for patients.
Problem

Research questions and friction points this paper is trying to address.

Address the legal considerations that XAI systems must meet to support healthcare accountability
Analyze how AI impacts patient care by identifying patterns across legal cases
Shift AI harm prevention from physician-centered to patient-centered approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

XAI systems designed for the legal needs of patients and their advocates
Patient-centered accountability as a frame for AI design
Liability structures that reflect multi-stakeholder AI impacts on care