The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support

📅 2024-01-25
🏛️ arXiv.org
📈 Citations: 15
Influential: 1
🤖 AI Summary
This study investigates how users from globally diverse backgrounds use LLM-based chatbots for everyday mental health support, and examines the associated safety risks and cultural adaptation challenges. Using a qualitative approach (21 in-depth interviews, thematic analysis, and cross-cultural comparison), it grounds its findings in empirical psychotherapy literature and ethics frameworks for human-AI interaction. First, it introduces the concept of “therapeutic alignment,” a design principle that explicitly anchors AI behavior to the core values of evidence-based psychotherapy. Second, it develops an ethics-informed design framework derived from authentic user narratives, bridging the theory–practice gap for general-purpose LLMs in mental health contexts. The study identifies five user-defined support roles (e.g., “emotional container,” “cognitive coach”) and proposes nine actionable design principles covering safety boundary specification, culturally responsive interaction, and mechanisms for verifying therapeutic alignment.

📝 Abstract
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools. Discussions on social media have described how engagements were lifesaving for some, but evidence suggests that general-purpose LLM chatbots also have notable risks that could endanger the welfare of users if not designed responsibly. In this study, we investigate the lived experiences of people who have used LLM chatbots for mental health support. We build on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles for their chatbots, fill in gaps in everyday care, and navigate associated cultural limitations when seeking support from chatbots. We ground our analysis in psychotherapy literature around effective support, and introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts. Our study offers recommendations for how designers can approach the ethical and effective use of LLM chatbots and other AI mental health support tools in mental health care.
Problem

Research questions and friction points this paper is trying to address.

How people experiencing distress use general-purpose LLM chatbots for everyday mental health support
What benefits and safety risks arise when general-purpose LLM chatbots fill gaps in care, and how users navigate their cultural limitations
How AI behavior can be aligned with therapeutic values so that chatbot-based mental health support is ethical and effective
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-depth interviews with 21 users from globally diverse backgrounds, analyzed through psychotherapy literature on effective support
Introduction of "therapeutic alignment": aligning AI with therapeutic values for mental health contexts
Design recommendations for the ethical and effective use of LLM chatbots and other AI support tools in mental health care