Accountability Framework for Healthcare AI Systems: Towards Joint Accountability in Decision Making

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
While AI applications in healthcare are proliferating, ambiguous accountability mechanisms have led to a misalignment between regulatory mandates ("what should be done") and operational practice ("how to do it"), leaving responsibility attribution among stakeholders unclear. Method: This paper proposes a three-tier joint accountability framework for healthcare AI, the first to systematically stratify accountability mechanisms by pattern of conduct: design, deployment, and use. It combines conceptual analysis, normative modeling, and multi-stakeholder mechanism design, augmented by explainable AI (XAI) techniques to enhance transparency and cross-entity coordination. Contribution/Results: The framework bridges the policy-practice gap by aligning regulatory compliance with operational feasibility. It improves decision transparency, strengthens traceability across the AI lifecycle, and enhances collaboration among clinicians, developers, regulators, and patients, thereby advancing trustworthy, accountable, and implementable AI governance in healthcare.

📝 Abstract
AI is transforming the healthcare domain and is increasingly helping practitioners to make health-related decisions. Therefore, accountability becomes a crucial concern for critical AI-driven decisions. Although regulatory bodies, such as the EU Commission, provide guidelines, they are high-level and focus on the "what" that should be done and less on the "how", creating a knowledge gap for actors. Through an extensive analysis, we found that the term "accountability" is perceived and dealt with in many different ways, depending on the actor's expertise and domain of work. With increasing concerns about AI accountability issues and the ambiguity around this term, this paper bridges the gap between the "what" and "how" of AI accountability, specifically for AI systems in healthcare. We do this by analysing the concept of accountability, formulating an accountability framework, and providing a three-tier structure for handling various accountability mechanisms. Our accountability framework positions the regulations of healthcare AI systems and the mechanisms adopted by the actors under a consistent accountability regime. Moreover, the three-tier structure guides the actors of the healthcare AI system in categorising the mechanisms based on their conduct. Through our framework, we advocate that decision-making in healthcare AI holds shared dependencies, where accountability should be dealt with jointly and should foster collaboration. We highlight the role of explainability in instigating communication and information sharing between the actors to further facilitate the collaborative process.
Problem

Research questions and friction points the paper addresses.

Addressing accountability ambiguity in healthcare AI systems
Bridging the gap between regulatory guidelines and practical implementation
Establishing joint accountability framework for AI-assisted medical decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a three-tier accountability framework structure
Established joint accountability mechanisms for collaborative decision-making
Integrated explainability to facilitate communication between system actors
Prachi Bagave
Delft University of Technology, The Netherlands

Marcus Westberg
Delft University of Technology, The Netherlands

Marijn Janssen
Professor in ICT & Governance, Faculty of Technology, Policy and Management, Delft University of Technology
Topics: ICT and Governance, ICT-infrastructure, open data, AI, open government

Aaron Yi Ding
Delft University of Technology, The Netherlands