🤖 AI Summary
This study investigates how clinicians' expectations regarding medical AI errors engender legal concerns and shape requirements for explainable AI (XAI) design. Method: Through semi-structured interviews with 10 practicing physicians, we adopt an interdisciplinary lens integrating medical ethics, legal liability, and human-AI interaction to identify clinicians' blind spots concerning AI error attribution, evidentiary preservation, and decision traceability. Results: We find that clinicians require not merely technical explanations but contextually embedded, clinically actionable ones (integrated into electronic health records, audit logs, and operational workflows) to support legal accountability. Accordingly, we propose a liability-driven XAI design framework that systematically incorporates legal practice requirements (including auditability, responsibility signaling, and clinical context reconstruction) into the explanation generation process. This framework enhances clinicians' trust in, perceived control over, and regulatory confidence in AI recommendations, advancing XAI from "technically interpretable" to "legally defensible."
Abstract
Physicians are (and feel) ethically, professionally, and legally responsible for patient outcomes, buffering patients from harmful determinations made by medical AI systems. Many have called for explainable AI (XAI) systems to help physicians incorporate medical AI recommendations into their workflows in a way that reduces the potential for harm to patients. While prior work has demonstrated how physicians' legal concerns affect their medical decision making, little work has explored how XAI systems should be designed in light of these concerns. In this study, we conducted interviews with 10 physicians to understand where and how they anticipate errors occurring with a medical AI system and how these anticipated errors connect to their legal concerns. Physicians anticipated risks associated with using an AI system for patient care, but voiced uncertainty about how their legal risk mitigation strategies might change given a new technical system. Based on these findings, we describe implications for designing XAI systems that can address physicians' legal concerns. Specifically, we identify the need to provide AI recommendations alongside contextual information that guides physicians' risk mitigation strategies, including how ostensibly non-legal aspects of these systems, such as medical documentation and auditing requests, might be incorporated into a legal case.