🤖 AI Summary
This study investigates the practical roles and risks of large language models (LLMs) in non-clinical mental health support. Analyzing 2,147 authentic user–ChatGPT dialogues about mental health drawn from Reddit, we employed NLP-driven qualitative content analysis, thematic modeling, and community discourse mining to systematically identify usage patterns and latent risks. Our contribution is twofold: first, we propose an empirically grounded framework characterizing LLMs' structured supportive functions (emotional support, therapeutic rehearsal, self-awareness scaffolding, and affective validation); second, we identify critical risks, including inaccurate health advice, over-validation that can misguide users, privacy concerns, and inappropriate substitution for professional care. Results indicate that users value LLMs for their perceived safety, immediacy, and accessibility, yet they also underscore urgent safety and integration challenges. We therefore propose a clinical integration pathway and a multi-tiered safety-enhancement framework, offering an evidence-based foundation and methodological guidance for the responsible deployment of LLMs in mental health contexts.
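The summary does not specify the analysis pipeline, but as a rough illustration of what "thematic modeling" over a Reddit corpus can look like, the sketch below applies LDA topic modeling with scikit-learn. Everything in it (the placeholder `posts` list, the vectorizer settings, the theme count) is an illustrative assumption, not the authors' actual method.

```python
# Illustrative sketch only: LDA topic modeling over Reddit posts.
# This is NOT the paper's pipeline; the corpus, hyperparameters, and
# preprocessing below are placeholder assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "ChatGPT helped me rehearse a hard talk with my therapist",
    "I use it at 3am when no one else is awake to listen",
    "It validated feelings I could not explain to friends",
    # ... in practice, the ~2,147 collected dialogues would go here
]

# Bag-of-words features; drop English stop words and near-ubiquitous terms.
vectorizer = CountVectorizer(stop_words="english", max_df=0.95)
X = vectorizer.fit_transform(posts)

# Fit a small LDA model; the number of themes is an arbitrary choice here.
lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(X)

# Print the top terms per inferred theme for manual labeling, which is
# where human qualitative coding would take over from the model.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:8]]
    print(f"Theme {k}: {', '.join(top)}")
```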
📝 Abstract
We investigate the role of large language models (LLMs) in supporting mental health by analyzing Reddit posts and comments about mental health conversations with ChatGPT. Our findings reveal that users value ChatGPT as a safe, non-judgmental space, often favoring it over human support due to its accessibility, availability, and knowledgeable responses. ChatGPT provides a range of support, including actionable advice, emotional support, and validation, while helping users better understand their mental states. Additionally, we found that ChatGPT offers innovative support for individuals facing mental health challenges, such as assistance in navigating difficult conversations, preparing for therapy sessions, and exploring therapeutic interventions. However, users also voiced concerns about potential risks, including the spread of incorrect health advice, ChatGPT's overly validating nature, and privacy issues. We discuss the implications of LLMs as tools for mental health support in both everyday health and clinical therapy settings and suggest strategies to mitigate risks in LLM-powered interactions.