🤖 AI Summary
AI deployment in medicine faces a pervasive “translational gap,” driven primarily by a technology-centric paradigm that is fundamentally misaligned with clinicians’ diagnostic reasoning and decision-making practices.
Method: This study proposes a socio-technical co-support framework anchored in physicians’ cognitive processes and clinical workflows, introducing a clinical-cognition-oriented AI support paradigm that prioritizes real-world utility over context-agnostic benchmark performance. Integrating medical anthropology, cognitive science, and explainable AI (XAI), we design a data-driven tool architecture aligned with clinicians’ reasoning habits and operational constraints.
Contribution/Results: We establish a new evaluation framework for AI in healthcare—centered on clinical adaptability, explainability, and human-AI collaboration—thereby providing a systematic theoretical and practical guide for developing trustworthy, clinically integrated AI systems.
📝 Abstract
Artificial intelligence promises to revolutionise medicine, yet its impact remains limited because of the pervasive translational gap. We posit that the prevailing technology-centric approaches underpin this challenge, rendering such systems fundamentally incompatible with clinical practice, specifically diagnostic reasoning and decision making. Instead, we propose a novel sociotechnical conceptualisation of data-driven support tools designed to complement doctors' cognitive and epistemic activities. Crucially, it prioritises real-world impact over superhuman performance on inconsequential benchmarks.