Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of standardized crisis classification, reliable annotation benchmarks, and clinically validated evaluation protocols for large language models (LLMs) in mental health crisis intervention. We propose the first clinically grounded, six-category unified crisis taxonomy; construct a diverse, multi-source expert-annotated evaluation dataset; and design a human evaluation protocol assessing response appropriateness, implicit risk detection, and contextual consistency. Empirical results show that commercial LLMs outperform open-weight models overall, with robust explicit crisis identification but systematic deficiencies in interpreting ambiguous expressions, adapting to clinical context, and detecting indirect risk signals—leading to numerous inappropriate or potentially harmful responses. Our contributions include (1) the first clinical practice–informed crisis classification framework, (2) a domain-specific expert annotation benchmark, and (3) a comprehensive, multidimensional evaluation protocol—establishing a methodological foundation and empirical basis for safe, trustworthy LLM–assisted psychological crisis intervention.

📝 Abstract
The widespread use of chatbots powered by large language models (LLMs) such as ChatGPT and Llama has fundamentally reshaped how people seek information and advice across domains. Increasingly, these chatbots are being used in high-stakes contexts, including emotional support and mental health concerns. While LLMs can offer scalable support, their ability to safely detect and respond to acute mental health crises remains poorly understood. Progress is hampered by the absence of unified crisis taxonomies, robust annotated benchmarks, and empirical evaluations grounded in clinical best practices. In this work, we address these gaps by introducing a unified taxonomy of six clinically-informed mental health crisis categories, curating a diverse evaluation dataset, and establishing an expert-designed protocol for assessing response appropriateness. We systematically benchmark three state-of-the-art LLMs for their ability to classify crisis types and generate safe, appropriate responses. The results reveal that while LLMs are highly consistent and generally reliable in addressing explicit crisis disclosures, significant risks remain. A non-negligible proportion of responses are rated as inappropriate or harmful, with responses generated by an open-weight model exhibiting higher failure rates than those generated by the commercial ones. We also identify systemic weaknesses in handling indirect or ambiguous risk signals, a reliance on formulaic and inauthentic default replies, and frequent misalignment with user context. These findings underscore the urgent need for enhanced safeguards, improved crisis detection, and context-aware interventions in LLM deployments. Our taxonomy, datasets, and evaluation framework lay the groundwork for ongoing research and responsible innovation in AI-driven mental health support, helping to minimize harm and better protect vulnerable users.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' safety in detecting mental health crises
Assessing risks of harmful responses in crisis interventions
Identifying weaknesses in handling ambiguous risk signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed clinically-informed mental health crisis taxonomy
Created expert-designed protocol for response assessment
Benchmarked LLMs for crisis classification and safety
👥 Authors
Adrian Arnaiz-Rodriguez
ELLIS Alicante
Algorithmic Fairness · Trustworthy AI · Graph Theory · Graph Neural Networks · Network Science
Miguel Baidal
ELLIS Alicante, Alicante, Spain
Erik Derner
CIIRC CTU in Prague
Generative AI · Trustworthy AI · Human-centric AI · AI Safety · AI Security
Jenn Layton Annable
School of Health Science, The University of Nottingham, United Kingdom
Mark Ball
School of Psychology, Public Health and Social Care, University of Derby, United Kingdom
Mark Ince
Mental Health Practitioner, Social Worker & Independent Scholar, United Kingdom
Elvira Perez Vallejos
Professor of Mental Health & Digital Technology, NIHR Biomedical Research Centre
Mental health · Wellbeing · Data ethics · Responsible innovation
Nuria Oliver
ELLIS Alicante, Alicante, Spain