🤖 AI Summary
This study investigates how radiologists calibrate trust in and reliance on AI for prostate cancer diagnosis from MRI. Method: Two dual-phase controlled experiments systematically examine how the timing of AI prediction presentation and performance feedback (individual versus team-level accuracy) affect human-AI collaborative behavior and diagnostic accuracy. An interactive, pedagogy-informed diagnostic interface was developed to enable fine-grained behavioral tracking and multimodal integration. Contribution/Results: Human-AI collaboration consistently outperforms unassisted human diagnosis but remains below AI-only performance due to under-reliance. Showing AI predictions before clinicians commit to a diagnosis increases their adherence to the AI, whereas performance feedback did not significantly improve team accuracy. Critically, aggregating the decisions of multiple human-AI teams surpasses AI-only performance, pointing to a promising pathway toward synergistic human-AI augmentation. This work contributes rare experimental evidence from practicing clinical experts on how presentation timing and performance feedback shape reliance on AI, advancing evidence-based design of AI-assisted clinical decision support systems.
📝 Abstract
Despite the growing interest in human-AI decision making, experimental studies with domain experts remain rare, largely due to the difficulty of recruiting such experts and of setting up realistic experiments. In this work, we conduct an in-depth collaboration with radiologists on prostate cancer diagnosis from MRI images. Building on existing tools for teaching prostate cancer diagnosis, we develop an interface and conduct two experiments to study how AI assistance and performance feedback shape the decision making of domain experts. In Study 1, clinicians provided an initial diagnosis (human), then viewed the AI's prediction, and subsequently finalized their decision (human-AI team). In Study 2 (after a memory wash-out period), the same participants first received aggregated performance statistics from Study 1 (their own performance, the AI's performance, and their human-AI team performance) and then viewed the AI's prediction directly before making their diagnosis, with no independent initial diagnosis. These two workflows represent realistic ways that clinical AI tools might be used in practice; the second simulates a scenario in which doctors can adjust their reliance on and trust in AI based on prior performance feedback. Our findings show that, while human-AI teams consistently outperform humans alone, they still underperform the AI due to under-reliance, echoing prior studies with crowdworkers. Providing clinicians with performance feedback did not significantly improve the performance of human-AI teams, although showing AI decisions in advance nudged clinicians to follow the AI more. Meanwhile, we observe that an ensemble of human-AI teams can outperform the AI alone, suggesting promising directions for human-AI collaboration.