🤖 AI Summary
This work addresses the critical gap in domain-specific trustworthiness evaluation for large language models (LLMs) in high-stakes mental health applications. To this end, we propose TrustMH-Bench—the first multidimensional benchmark tailored to mental health—translating professional clinical guidelines into eight quantifiable dimensions: reliability, crisis identification and escalation, safety, fairness, privacy, robustness, anti-sycophancy, and ethics. Through extensive experiments evaluating both general-purpose and specialized LLMs, we reveal significant deficiencies across multiple trustworthiness dimensions, even in state-of-the-art models such as GPT-5.1, underscoring the urgent need for more trustworthy mental health AI systems. The benchmark, along with its data and code, is publicly released to establish a foundational resource for domain-specific trustworthiness assessment in mental health.
📝 Abstract
While Large Language Models (LLMs) demonstrate significant potential in providing accessible mental health support, their practical deployment raises critical trustworthiness concerns due to the domain's high-stakes and safety-sensitive nature. Existing evaluation paradigms for general-purpose LLMs fail to capture mental health-specific requirements, highlighting an urgent need to prioritize and enhance trustworthiness in this domain. To address this, we propose TrustMH-Bench, a holistic framework designed to systematically quantify the trustworthiness of mental health LLMs. By establishing a deep mapping from domain-specific norms to quantitative evaluation metrics, TrustMH-Bench evaluates models across eight core pillars: Reliability, Crisis Identification and Escalation, Safety, Fairness, Privacy, Robustness, Anti-sycophancy, and Ethics. We conduct extensive experiments across six general-purpose LLMs and six specialized mental health models. Experimental results reveal significant deficiencies across multiple trustworthiness dimensions in mental health scenarios. Notably, even generally powerful models (e.g., GPT-5.1) fail to maintain consistently high performance across all dimensions. Consequently, systematically improving the trustworthiness of LLMs has become a critical task. Our data and code are publicly released.