🤖 AI Summary
This work addresses the limitations of current ethical evaluations of large language models (LLMs) in mental health applications, which overly rely on refusal rates and fail to capture clinically essential qualities such as empathy and professional conduct. To bridge this gap, the authors introduce PsychEthicsBench—the first multidimensional evaluation benchmark grounded in Australian psychological and psychiatric ethics guidelines. It systematically assesses models’ ethical knowledge and behavioral responses through multiple-choice and open-ended tasks, augmented with fine-grained human annotations. Evaluations across 14 models reveal a significant disconnect between refusal rates and actual ethical behavior, and further demonstrate that domain-specific fine-tuning can sometimes undermine ethical consistency, thereby highlighting the inadequacy of conventional safety metrics.
📝 Abstract
The increasing integration of large language models (LLMs) into mental health applications necessitates robust frameworks for evaluating professional safety alignment. Current evaluative approaches primarily rely on refusal-based safety signals, which offer limited insight into the nuanced behaviors required in clinical practice. In mental health, clinically inadequate refusals can be perceived as unempathetic and discourage help-seeking. To address this gap, we move beyond refusal-centric metrics and introduce \texttt{PsychEthicsBench}, the first principle-grounded benchmark based on Australian psychology and psychiatry guidelines, designed to evaluate LLMs' ethical knowledge and behavioral responses through multiple-choice and open-ended tasks with fine-grained ethicality annotations. Empirical results across 14 models show that refusal rates are poor indicators of ethical behavior, exposing a significant divergence between safety triggers and clinical appropriateness. Notably, we find that domain-specific fine-tuning can degrade ethical robustness, as several specialized models underperform their base backbones in ethical alignment. PsychEthicsBench provides a foundation for systematic, jurisdiction-aware evaluation of LLMs in mental health, encouraging more responsible development in this domain.