🤖 AI Summary
In clinical AI deployment, opaque "oracle"-style systems exacerbate accountability ambiguity and incentivize defensive medicine, undermining responsible human-AI collaboration.

Method: Using a legal-policy analysis framework, the study takes a two-dimensional comparative approach, cross-referencing levels of AI automation with degrees of explainability, to examine how liability is allocated among clinicians, healthcare institutions, and AI manufacturers.

Contribution/Results: The paper argues that explainability is not merely a technical feature but an institutional prerequisite for robust accountability frameworks. By reducing legal uncertainty and litigation risk, and by fostering calibrated human-AI trust, explainability mitigates defensive behavior and enables legally coherent, ethically grounded, and clinically viable AI-augmented decision-making. This reframing positions explainability as foundational to enforceable and socially legitimate clinical AI governance.
📝 Abstract
Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes to humans at an acceptable level of comprehensibility, has been deemed essential for critical sectors such as healthcare. Is that really the case? In this perspective, we analyze two extreme cases: the "Oracle" (without explainability) and the "AI Colleague" (with explainability). We discuss how the level of automation and explainability of an AIS can affect the allocation of liability among the medical practitioner, the healthcare facility, and the manufacturer of the AIS. We argue that, from a legal standpoint, explainability plays a crucial role in establishing a responsibility framework in healthcare, one that shapes the behavior of all involved parties and mitigates the risk of defensive medicine practices.