🤖 AI Summary
Existing multi-agent medical decision-making systems lack effective mechanisms for human questioning and intervention, and reliance on explainability alone cannot ensure trustworthiness, accountability, and meaningful human oversight. This work introduces the concept of “contestability” into multi-agent healthcare settings for the first time, proposing a novel framework with contestability as a core design principle. The framework integrates human-in-the-loop participation, role-based contestation, and structured argumentation to support transparency, enable structured human intervention, and facilitate corrective actions. In doing so, it strengthens clinical accountability and human agency, offering both a theoretical foundation and a practical design pathway for trustworthy, human-controlled multi-agent systems in high-stakes medical environments.
📝 Abstract
Multi-agent systems (MAS) are increasingly used in healthcare to support complex decision-making through collaboration among specialized agents. Because these systems act as collective decision-makers, they raise challenges for trust, accountability, and human oversight. Existing approaches to trustworthy AI rely largely on explainability, but explainability alone is insufficient in multi-agent settings because it does not enable care partners to challenge or correct system outputs. To address this limitation, Contestable AI (CAI) characterizes systems that support effective human challenge throughout the decision-making lifecycle by providing transparency, structured opportunities for intervention, and mechanisms for review, correction, or override. This position paper argues that contestability is a necessary design requirement for trustworthy multi-agent algorithmic care systems. We identify key limitations in current MAS and Explainable AI (XAI) research and present a human-in-the-loop framework that integrates structured argumentation and role-based contestation to preserve human agency, clinical responsibility, and trust in high-stakes care contexts.