🤖 AI Summary
Clinical decision support systems (CDSS) in data-driven healthcare suffer from poor interpretability and insufficient ethical governance, undermining clinician trust. Method: We propose a transparent, business-logic-driven multi-agent architecture that intrinsically integrates core ethical AI governance principles—autonomy, fairness, and accountability. Designed for intensive care, the system employs modular agents to independently analyze laboratory results, vital signs, and clinical context, enabling collaborative reasoning for interpretable predictions and result verification. Contribution/Results: Evaluated on the eICU dataset, our model significantly outperforms baseline methods in both predictive accuracy and explainability, while demonstrably increasing clinicians’ trust in AI-assisted decisions. To our knowledge, this is the first work to embed ethical AI governance directly into the architectural design of a multi-agent CDSS, establishing a reusable methodological framework and practical paradigm for building trustworthy, ethically grounded clinical AI systems.
📝 Abstract
In the age of data-driven medicine, embedding explainable and ethically governed artificial intelligence in clinical decision support systems is paramount for trustworthy and effective patient care. This paper presents a new multi-agent architecture for clinical decision support in which modular agents analyze laboratory results, vital signs, and clinical context, then integrate their findings to drive predictions and validate outcomes. We describe an implementation on the eICU database that runs lab-analysis agents, vitals-focused interpreters, and contextual reasoners, followed by a prediction module and a validation agent. The entire pipeline is implemented as transparent business logic, guided by the ethical AI governance principles of autonomy, fairness, and accountability. Our results show that this agent-based framework not only improves interpretability and accuracy but also reinforces trust in AI-assisted decisions in an intensive care setting.
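The agent pipeline described in the abstract could be sketched roughly as below. This is a minimal illustrative mock-up, not the authors' implementation: every class name (`LabAgent`, `VitalsAgent`, `ContextAgent`, `PredictionModule`, `ValidationAgent`), field, and threshold is an assumption chosen only to show the shape of modular agents producing findings with rationales, an aggregating predictor, and a verifying agent.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str     # which agent produced this finding (accountability)
    risk: float     # contribution to risk, in [0, 1]
    rationale: str  # human-readable explanation (interpretability)

class LabAgent:
    """Analyzes laboratory results independently (illustrative rule)."""
    def analyze(self, patient):
        lactate = patient["lactate"]
        return Finding("lab", min(lactate / 4.0, 1.0), f"lactate={lactate} mmol/L")

class VitalsAgent:
    """Interprets vital signs only (illustrative MAP threshold)."""
    def analyze(self, patient):
        map_bp = patient["mean_arterial_pressure"]
        return Finding("vitals", 1.0 if map_bp < 65 else 0.2, f"MAP={map_bp} mmHg")

class ContextAgent:
    """Reasons over clinical context, e.g. comorbidity burden."""
    def analyze(self, patient):
        n = len(patient["comorbidities"])
        return Finding("context", min(0.2 * n, 1.0), f"{n} comorbidities")

class PredictionModule:
    """Combines agent findings into one score plus a traceable explanation."""
    def predict(self, findings):
        score = sum(f.risk for f in findings) / len(findings)
        return score, [f"{f.source}: {f.rationale}" for f in findings]

class ValidationAgent:
    """Verifies the result with a simple consistency check before it is shown."""
    def validate(self, score, findings):
        return 0.0 <= score <= 1.0 and all(0.0 <= f.risk <= 1.0 for f in findings)

patient = {"lactate": 3.2, "mean_arterial_pressure": 58, "comorbidities": ["CKD", "CHF"]}
findings = [a.analyze(patient) for a in (LabAgent(), VitalsAgent(), ContextAgent())]
score, explanation = PredictionModule().predict(findings)
assert ValidationAgent().validate(score, findings)
```

Because each agent returns a `rationale` alongside its score, the final prediction carries a per-agent explanation trail, which is the interpretability property the paper attributes to its business-logic-driven design.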