Speculative Model Risk in Healthcare AI: Using Storytelling to Surface Unintended Harms

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Rapid advances in medical AI raise concerns about bias, privacy violations, and inequitable access, yet existing automated risk-detection methods reduce human engagement in understanding how harms arise and whom they affect. This paper proposes a human-centered risk-identification framework that integrates generated user stories with multi-agent simulated dialogues to elicit collective reflection on potential societal harms and benefits before deployment. Unlike conventional technology-centric approaches, the framework uses narrative-driven creative reasoning to broaden and balance hazard identification. In a user study, participants using this method distributed their responses more evenly across 13 harm categories, identified structural and contextual risks that automated tools overlook, and meaningfully expanded the boundaries of their risk awareness.

📝 Abstract
Artificial intelligence (AI) is rapidly transforming healthcare, enabling fast development of tools like stress monitors, wellness trackers, and mental health chatbots. However, rapid and low-barrier development can introduce risks of bias, privacy violations, and unequal access, especially when systems ignore real-world contexts and diverse user needs. Many recent methods use AI to detect risks automatically, but this can reduce human engagement in understanding how harms arise and who they affect. We present a human-centered framework that generates user stories and supports multi-agent discussions to help people think creatively about potential benefits and harms before deployment. In a user study, participants who read stories recognized a broader range of harms, distributing their responses more evenly across all 13 harm types. In contrast, those who did not read stories focused primarily on privacy and well-being (58.3%). Our findings show that storytelling helped participants speculate about a broader range of harms and benefits and think more creatively about AI's impact on users.
Problem

Research questions and friction points this paper is trying to address.

Addressing speculative risks in healthcare AI through human-centered storytelling
Identifying unintended harms from AI systems ignoring real-world contexts
Enhancing creative speculation about AI impacts before deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered framework generating user stories
Multi-agent discussions supporting creative speculation
Storytelling broadening harm recognition pre-deployment
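The multi-agent discussion idea above can be sketched as a round-robin over simulated personas reacting to a generated user story. This is an illustrative skeleton only, not the paper's implementation: the persona names, harm types, and `respond` logic are hypothetical placeholders, and a real system would back `respond` with a language model.

```python
from dataclasses import dataclass

# Hypothetical harm types; the paper's study covers 13 such categories.
HARM_TYPES = ["privacy", "bias", "unequal access", "well-being"]

@dataclass
class Persona:
    """One simulated discussant with a fixed viewpoint."""
    name: str
    concern: str  # the harm type this persona tends to surface

    def respond(self, story: str) -> str:
        # Placeholder: a real system would query an LLM with the
        # persona description and the story as context.
        return f"{self.name}: this story raises a {self.concern} risk."

def discuss(story: str, personas: list[Persona], rounds: int = 1) -> list[str]:
    """Round-robin discussion: each persona comments on the story in turn."""
    transcript = []
    for _ in range(rounds):
        for p in personas:
            transcript.append(p.respond(story))
    return transcript

personas = [Persona("Clinician", "well-being"), Persona("Patient", "privacy")]
transcript = discuss("A stress-monitor app shares data with employers.", personas)
```

A richer version would let each persona see the running transcript, so later turns can build on earlier ones rather than reacting to the story in isolation.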
Xingmeng Zhao
The University of Texas at San Antonio
Dan Schumacher
The University of Texas at San Antonio
Veronica Rammouz
The University of Texas at San Antonio
Anthony Rios
Associate Professor in Information Systems and Cyber Security
Natural Language Processing · Biomedical Informatics · Computational Social Science · Social Computing