AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression

📅 2025-04-26
🤖 AI Summary
This study investigates ethical risks of large language model (LLM)-based mental health chatbots for depression self-management, focusing on tensions between AI behaviors and the core values of people with lived experience. Method: the authors conducted a value-sensitive study with 17 individuals with lived experience of depression, using a technology probe (Zenny, a GPT-4o-based chatbot) and in-depth interviews built around depression self-management scenarios informed by prior research. Results: thematic analysis identified five key values (informational support, emotional support, personalization, privacy, and crisis management) and mapped the relationships between these values, potential AI harms, and corresponding design recommendations. The work offers empirically grounded guidance for designing mental health AI chatbots that support self-management while minimizing risks.

📝 Abstract
Recent advancements in LLMs enable chatbots to interact with individuals on a range of queries, including sensitive mental health contexts. Despite uncertainties about their effectiveness and reliability, the development of LLMs in these areas is growing, potentially leading to harms. To better identify and mitigate these harms, it is critical to understand how the values of people with lived experiences relate to the harms. In this study, we developed a technology probe, a GPT-4o based chatbot called Zenny, enabling participants to engage with depression self-management scenarios informed by previous research. We used Zenny to interview 17 individuals with lived experiences of depression. Our thematic analysis revealed key values: informational support, emotional support, personalization, privacy, and crisis management. This work explores the relationship between lived experience values, potential harms, and design recommendations for mental health AI chatbots, aiming to enhance self-management support while minimizing risks.
Problem

Research questions and friction points this paper is trying to address.

Exploring AI chatbot values and harms in depression support
Assessing effectiveness and reliability of LLMs in mental health
Identifying design recommendations to minimize chatbot risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4o based chatbot for depression scenarios
Thematic analysis of lived experience values
Design recommendations to minimize AI harms