A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited understanding of how individuals with mental health conditions engage with large language models (LLMs) for psychological support in real-world contexts. Through semi-structured interviews with 20 UK participants diagnosed with mental health disorders, analysed using reflexive thematic analysis, the research identifies a pattern in which users actively set boundaries around LLM use based on their prior therapeutic experiences. Findings indicate that LLMs are perceived as suitable for mild-to-moderate emotional distress but show significant limitations in crisis intervention, trauma processing, and complex socio-emotional scenarios. The study underscores "boundary awareness" as a core principle for integrating AI-based psychological support into care ecosystems and provides empirical grounding for the responsible design and governance of such systems.

📝 Abstract
Large Language Models (LLMs) are increasingly used for mental health support, yet little is known about how people with mental health challenges engage with them, how they evaluate their usefulness, and what design opportunities they envision. We conducted 20 semi-structured interviews with people in the UK who live with mental health conditions and have used LLMs for mental health support. Through reflexive thematic analysis, we found that participants engaged with LLMs in conditional and situational ways: for immediacy, non-judgement, self-paced disclosure, cognitive reframing, and relational engagement. Simultaneously, participants articulated clear boundaries informed by prior therapeutic experience: LLMs were effective for mild-to-moderate distress but inadequate for crises, trauma, and complex social-emotional situations. We contribute empirical insights into the lived use of LLMs for mental health, highlight boundary-setting as central to their safe role, and propose design and governance directions for embedding them responsibly within care ecosystems.
Problem

Research questions and friction points this paper is trying to address.

mental health
Large Language Models
lived experience
human-AI interaction
digital mental health
Innovation

Methods, ideas, or system contributions that make the work stand out.

conditional companion
boundary-setting
lived experience
mental health support
LLM governance