Developer Insights into Designing AI-Based Computer Perception Tools

📅 2025-08-29
🤖 AI Summary
This study addresses the challenge of effectively integrating AI-powered clinical perception tools into real-world healthcare workflows, focusing on the balance among clinical utility, user acceptance, and system trustworthiness. We conducted a qualitative investigation based on in-depth interviews with 20 developers of AI healthcare tools, using inductive thematic analysis to identify key design barriers impeding clinical adoption. Our analysis yields a novel conceptual framing—"developers as ethical stewards"—emphasizing their proactive role in embedding clinical knowledge and ethical responsibility throughout technical implementation. Based on this, we distill four interdisciplinary design priorities: (1) transparent, customizable decision logic; (2) clearly defined human–AI role boundaries; (3) progressive, context-aware user training pathways; and (4) clinically grounded explainability. The findings provide both a theoretical framework and actionable guidelines for enhancing the clinical integration, trustworthiness, and responsible innovation of AI tools in healthcare settings.

📝 Abstract
Artificial intelligence (AI)-based computer perception (CP) technologies use mobile sensors to collect behavioral and physiological data for clinical decision-making. These tools can reshape how clinical knowledge is generated and interpreted. However, effective integration of these tools into clinical workflows depends on how developers balance clinical utility with user acceptability and trustworthiness. Our study presents findings from 20 in-depth interviews with developers of AI-based CP tools. Interviews were transcribed, and inductive thematic analysis was performed to identify four key design priorities: 1) account for context and ensure explainability for both patients and clinicians; 2) align tools with existing clinical workflows; 3) customize appropriately for relevant stakeholders to support usability and acceptability; and 4) push the boundaries of innovation while aligning with established paradigms. Our findings highlight that developers view themselves not merely as technical architects but also as ethical stewards, designing tools that are both acceptable to users and epistemically responsible (prioritizing objectivity and pushing clinical knowledge forward). We offer the following suggestions to help achieve this balance: documenting how design choices around customization are made, defining limits for customization choices, transparently conveying information about outputs, and investing in user training. Achieving these goals will require interdisciplinary collaboration among developers, clinicians, and ethicists.
Problem

Research questions and friction points this paper is trying to address.

Balancing clinical utility with user acceptability in AI tools
Integrating AI-based perception tools into clinical workflows
Ensuring explainability and customization for stakeholders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Account for context and ensure explainability for stakeholders
Align tools with existing clinical workflows for integration
Appropriately customize for usability and stakeholder acceptability
Maya Guhan
Center for Ethics and Health Policy, Baylor College of Medicine, Houston, TX
Meghan E. Hurley
Center for Ethics and Health Policy, Baylor College of Medicine, Houston, TX
Eric A. Storch
Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX
John Herrington
The Children's Hospital of Philadelphia
Autism Spectrum Disorder, Anxiety, MRI, Psychophysiology
Casey Zampella
Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA
Julia Parish-Morris
Associate Professor, University of Pennsylvania and Children's Hospital of Philadelphia
Child Development, Autism, Language, Social Cognition, Sex Differences
Gabriel Lázaro-Muñoz
Center for Bioethics, Harvard Medical School, Boston, MA
Kristin Kostick-Quenet
Center for Ethics and Health Policy, Baylor College of Medicine, Houston, TX