🤖 AI Summary
This study addresses the lack of standardized crisis classification, reliable annotation benchmarks, and clinically validated evaluation protocols for large language models (LLMs) in mental health crisis intervention. We propose the first clinically grounded, six-category unified crisis taxonomy; construct a diverse, multi-source, expert-annotated evaluation dataset; and design a human evaluation protocol assessing response appropriateness, implicit risk detection, and contextual consistency. Empirical results show that the commercial LLMs outperform the open-weight model overall: all models identify explicit crisis disclosures robustly, but exhibit systematic deficiencies in interpreting ambiguous expressions, adapting to clinical context, and detecting indirect risk signals, which lead to a non-negligible share of inappropriate or potentially harmful responses. Our contributions are (1) the first crisis classification framework informed by clinical practice, (2) a domain-specific, expert-annotated benchmark, and (3) a comprehensive, multidimensional evaluation protocol, together establishing a methodological and empirical foundation for safe, trustworthy LLM-assisted psychological crisis intervention.
📝 Abstract
The widespread use of chatbots powered by large language models (LLMs) such as ChatGPT and Llama has fundamentally reshaped how people seek information and advice across domains. Increasingly, these chatbots are being used in high-stakes contexts, including emotional support and mental health concerns. While LLMs can offer scalable support, their ability to safely detect and respond to acute mental health crises remains poorly understood. Progress is hampered by the absence of unified crisis taxonomies, robust annotated benchmarks, and empirical evaluations grounded in clinical best practices. In this work, we address these gaps by introducing a unified taxonomy of six clinically informed mental health crisis categories, curating a diverse evaluation dataset, and establishing an expert-designed protocol for assessing response appropriateness. We systematically benchmark three state-of-the-art LLMs on their ability to classify crisis types and generate safe, appropriate responses. The results reveal that while LLMs are highly consistent and generally reliable in addressing explicit crisis disclosures, significant risks remain. A non-negligible proportion of responses are rated as inappropriate or harmful, with responses generated by an open-weight model exhibiting higher failure rates than those generated by the commercial models. We also identify systemic weaknesses in handling indirect or ambiguous risk signals, a reliance on formulaic and inauthentic default replies, and frequent misalignment with user context. These findings underscore the urgent need for enhanced safeguards, improved crisis detection, and context-aware interventions in LLM deployments. Our taxonomy, datasets, and evaluation framework lay the groundwork for ongoing research and responsible innovation in AI-driven mental health support, helping to minimize harm and better protect vulnerable users.