Augmenting Clinical Decision-Making with an Interactive and Interpretable AI Copilot: A Real-World User Study with Clinicians in Nephrology and Obstetrics

📅 2026-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses clinicians’ distrust of black-box AI systems, which hinders adoption in high-stakes clinical settings. To overcome this barrier, we propose AICare, an interactive, interpretable AI collaborator that integrates longitudinal electronic health record analysis, explainable visualizations, and large language models to dynamically generate risk predictions and diagnostic suggestions, supporting shared decision-making in nephrology and obstetrics. The work establishes a trust mechanism centered on transparent interaction and reveals distinct collaboration strategies across clinician experience levels: junior clinicians use AICare as a cognitive scaffold, while experts employ it for adversarial validation. Empirical evaluations using NASA-TLX, SUS, and qualitative analyses demonstrate that AICare significantly reduces cognitive load, enhances task efficiency and user confidence, and fosters trust while accommodating diverse clinical reasoning styles.
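The pipeline the summary describes (longitudinal EHR analysis, explanation, LLM-generated suggestions) can be sketched in a highly simplified, hypothetical form. The names below (`predict_risk`, `top_attributions`, `build_prompt`) are illustrative placeholders and do not correspond to the actual AICare implementation or API.

```python
# Minimal, hypothetical sketch of an AICare-style copilot loop.
# None of these names come from the paper; they only illustrate the
# "risk prediction -> explanation -> LLM suggestion" flow it describes.
from dataclasses import dataclass


@dataclass
class Visit:
    """One time point in a longitudinal EHR record."""
    date: str
    features: dict[str, float]  # e.g. {"creatinine": 2.1, "albumin": 3.4}


def predict_risk(visits: list[Visit]) -> float:
    """Placeholder for a longitudinal risk model (e.g. an RNN/Transformer).
    Here: a trivial rule so the sketch runs end to end."""
    latest = visits[-1].features
    return min(1.0, 0.1 + 0.2 * latest.get("creatinine", 1.0))


def top_attributions(visits: list[Visit], k: int = 3) -> list[tuple[str, float]]:
    """Placeholder for per-feature attributions that would drive the
    explainable visualizations shown to the clinician."""
    latest = visits[-1].features
    return sorted(latest.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]


def build_prompt(risk: float, attributions: list[tuple[str, float]]) -> str:
    """Ground the LLM's suggestion in the model's prediction and explanation."""
    lines = [f"Predicted adverse-outcome risk: {risk:.2f}",
             "Most influential recent features:"]
    lines += [f"- {name}: {value}" for name, value in attributions]
    lines.append("Suggest next diagnostic steps for the clinician to verify.")
    return "\n".join(lines)


if __name__ == "__main__":
    record = [Visit("2025-01-10", {"creatinine": 1.2, "albumin": 3.9}),
              Visit("2025-06-02", {"creatinine": 2.4, "albumin": 3.1})]
    risk = predict_risk(record)
    prompt = build_prompt(risk, top_attributions(record))
    print(prompt)  # in a real system, this prompt would be sent to an LLM
```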

📝 Abstract
Clinician skepticism toward opaque AI hinders adoption in high-stakes healthcare. We present AICare, an interactive and interpretable AI copilot for collaborative clinical decision-making. By analyzing longitudinal electronic health records, AICare grounds dynamic risk predictions in scrutable visualizations and LLM-driven diagnostic recommendations. Through a within-subjects counterbalanced study with 16 clinicians across nephrology and obstetrics, we comprehensively evaluated AICare using objective measures (task completion time and error rate), subjective assessments (NASA-TLX, SUS, and confidence ratings), and semi-structured interviews. Our findings indicate that AICare reduces cognitive workload. Beyond performance metrics, qualitative analysis reveals that trust is actively constructed through verification, with interaction strategies diverging by expertise: junior clinicians used the system as cognitive scaffolding to structure their analysis, while experts engaged in adversarial verification to challenge the AI's logic. This work offers design implications for creating AI systems that function as transparent partners, accommodating diverse reasoning styles to augment rather than replace clinical judgment.
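For readers unfamiliar with the questionnaires named above, both have standard scoring rules. The sketch below applies the conventional System Usability Scale formula and a raw (unweighted) NASA-TLX average; it is generic scoring code, not the authors' analysis pipeline, and the example responses are made up.

```python
# Standard scoring for the two questionnaires mentioned in the abstract.
# Generic scoring logic only; not code or data from the AICare study.

def sus_score(responses: list[int]) -> float:
    """System Usability Scale: 10 items rated 1-5.
    Odd-numbered items contribute (rating - 1), even-numbered items (5 - rating);
    the summed contributions are scaled by 2.5 to a 0-100 range."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i
                for i, r in enumerate(responses))     # corresponds to odd items
    return total * 2.5

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw NASA-TLX: unweighted mean of the six 0-100 subscale ratings."""
    dims = ["mental", "physical", "temporal", "performance", "effort", "frustration"]
    assert set(ratings) == set(dims)
    return sum(ratings[d] for d in dims) / len(dims)

# Example: one hypothetical participant's responses.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # -> 85.0
print(raw_tlx({"mental": 30, "physical": 10, "temporal": 25,
               "performance": 20, "effort": 35, "frustration": 15}))  # -> 22.5
```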
Problem

Research questions and friction points this paper is trying to address.

clinical decision-making
AI transparency
clinician trust
interpretable AI
healthcare AI adoption
Innovation

Methods, ideas, or system contributions that make the work stand out.

interpretable AI
interactive AI copilot
clinical decision support
trust calibration
human-AI collaboration
👥 Authors
Yinghao Zhu
The University of Hong Kong
Data Mining, AI for Healthcare
Dehao Sui
Peking University, Beijing, China
Zixiang Wang
Peking University
AI for Healthcare
Xuning Hu
Hong Kong University of Science and Technology, Hong Kong, China
Lei Gu
Peking University, Beijing, China
Yifan Qi
Peking University, Beijing, China
Tianchen Wu
Peking University Third Hospital, Beijing, China
Ling Wang
Affiliated Xuzhou Municipal Hospital of Xuzhou Medical University, Jiangsu, China
Yuan Wei
Peking University Third Hospital, Beijing, China
Wen Tang
Peking University Third Hospital, Beijing, China
Zhihan Cui
Peking University, Beijing, China
Yasha Wang
Peking University, Beijing, China
Lequan Yu
Assistant Professor, The University of Hong Kong
Medical Image Analysis, Multimodal Learning, Computational Pathology, AI for Healthcare
Ewen M Harrison
Professor of Surgery and Data Science, University of Edinburgh
AI, data science, surgery, global health, liver and pancreas cancer
Junyi Gao
University of Edinburgh
Data Mining, AI for healthcare
Liantao Ma
Peking University, Beijing, China