🤖 AI Summary
As AI applications in healthcare proliferate, ambiguous accountability mechanisms have created a misalignment between regulatory mandates (“what should be done”) and operational practice (“how to do it”), leaving responsibility attribution among stakeholders unclear.
Method: This paper proposes a novel three-tier joint accountability framework for healthcare AI, the first to systematically stratify accountability according to behavioral patterns—namely, design, deployment, and use. It integrates conceptual analysis, normative modeling, and multi-stakeholder mechanism design, augmented by explainable AI (XAI) techniques to enhance transparency and cross-entity coordination.
Contribution/Results: The framework bridges the policy–practice gap by coherently aligning regulatory compliance with operational feasibility. It significantly improves decision transparency, strengthens traceability across the AI lifecycle, and enhances collaborative efficacy among clinicians, developers, regulators, and patients—thereby advancing trustworthy, accountable, and implementable AI governance in healthcare.
📝 Abstract
AI is transforming the healthcare domain and is increasingly helping practitioners make health-related decisions. Accountability therefore becomes a crucial concern for critical AI-driven decisions. Although regulatory bodies, such as the EU Commission, provide guidelines, these are high-level and focus on the “what” that should be done rather than the “how”, creating a knowledge gap for actors. Through an extensive analysis, we found that the term accountability is perceived and dealt with in many different ways, depending on the actor's expertise and domain of work. With increasing concerns about AI accountability issues and the ambiguity around this term, this paper bridges the gap between the “what” and “how” of AI accountability, specifically for AI systems in healthcare. We do this by analysing the concept of accountability, formulating an accountability framework, and providing a three-tier structure for handling various accountability mechanisms. Our accountability framework positions the regulations of healthcare AI systems and the mechanisms adopted by the actors under a consistent accountability regime. Moreover, the three-tier structure guides the actors of the healthcare AI system in categorising the mechanisms based on their conduct. Through our framework, we argue that decision-making in healthcare AI involves shared dependencies, so accountability should be handled jointly and should foster collaboration. We highlight the role of explainability in initiating communication and information sharing between the actors to further facilitate this collaborative process.