🤖 AI Summary
This study addresses the challenge of effectively integrating AI-powered clinical perception tools into real-world healthcare workflows, focusing on balancing clinical utility, user acceptance, and system trustworthiness. We conducted a qualitative investigation, using in-depth interviews with 20 developers of AI healthcare tools and inductive thematic analysis, to identify key design barriers impeding clinical adoption. Our analysis yields a novel conceptual framing—“developers as ethical stewards”—emphasizing developers’ proactive role in embedding clinical knowledge and ethical responsibility throughout technical implementation. From this, we distill four interdisciplinary design priorities: (1) transparent, customizable decision logic; (2) clearly defined human–AI role boundaries; (3) progressive, context-aware user training pathways; and (4) clinically grounded explainability. The findings provide both a theoretical framework and actionable guidelines for enhancing the clinical integration, trustworthiness, and responsible innovation of AI tools in healthcare settings.
📝 Abstract
Artificial intelligence (AI)-based computer perception (CP) technologies use mobile sensors to collect behavioral and physiological data for clinical decision-making. These tools can reshape how clinical knowledge is generated and interpreted. However, effective integration of these tools into clinical workflows depends on how developers balance clinical utility with user acceptability and trustworthiness. Our study presents findings from 20 in-depth interviews with developers of AI-based CP tools. Interviews were transcribed, and inductive thematic analysis was performed to identify four key design priorities: (1) account for context and ensure explainability for both patients and clinicians; (2) align tools with existing clinical workflows; (3) customize appropriately to relevant stakeholders for usability and acceptability; and (4) push the boundaries of innovation while aligning with established paradigms. Our findings highlight that developers view themselves not merely as technical architects but also as ethical stewards, designing tools that are both acceptable to users and epistemically responsible (prioritizing objectivity and pushing clinical knowledge forward). We offer the following suggestions to help achieve this balance: documenting how design choices around customization are made, defining limits for customization choices, transparently conveying information about outputs, and investing in user training. Achieving these goals will require interdisciplinary collaboration between developers, clinicians, and ethicists.