Designing Beyond Language: Sociotechnical Barriers in AI Health Technologies for Limited English Proficiency

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how AI-enabled health technologies can address non-linguistic sociotechnical barriers—such as institutional workflow constraints, cultural misalignment, privacy concerns, and unstable technology access—faced by Spanish-speaking patients with limited English proficiency (LEP) in the U.S. healthcare system. Employing storyboard-driven qualitative interviews and integrating human-computer interaction (HCI) and sociotechnical systems analysis, the research examines real-world integration pathways for AI-powered translation and pre-visit preparation tools. Findings indicate that while AI mitigates communication and procedural barriers, it simultaneously introduces risks of misplaced trust and erosion of clinician-patient rapport. The study proposes a novel "relationship-centered" AI design framework, emphasizing trust co-construction, digital literacy–aligned interfaces, minimal workflow disruption, and fidelity to existing care practices. This framework advances both theoretical understanding and practical implementation of inclusive, equity-oriented healthcare AI.

📝 Abstract
Limited English proficiency (LEP) patients in the U.S. face systemic barriers to healthcare beyond language and interpreter access, encompassing procedural and institutional constraints. AI advances may support communication and care through on-demand translation and visit preparation, but also risk exacerbating existing inequalities. We conducted storyboard-driven interviews with 14 patient navigators to explore how AI could shape care experiences for Spanish-speaking LEP individuals. We identified tensions around linguistic and cultural misunderstandings, privacy concerns, and opportunities and risks for AI to augment care workflows. Participants highlighted structural factors that can undermine trust in AI systems, including sensitive information disclosure, unstable technology access, and low digital literacy. While AI tools can potentially alleviate social barriers and institutional constraints, there are risks of misinformation and uprooting human camaraderie. Our findings contribute design considerations for AI that support LEP patients and care teams via rapport-building, education, and language support, and minimizing disruptions to existing practices.
Problem

Research questions and friction points this paper is trying to address.

AI health technologies risk exacerbating healthcare inequalities for limited English proficiency patients
Structural barriers undermine trust in AI systems through privacy concerns and technology access
AI tools introduce tensions between potential care benefits and risks of misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Storyboard-driven interviews with 14 patient navigators
AI design considerations centered on rapport-building, education, and language support
Design guidance for minimizing disruptions to existing care practices