🤖 AI Summary
Current large language models (LLMs) lack robust, real-world psychiatric clinical evaluation benchmarks, hindering the development of domain-specialized models. To address this, we introduce PsychBench, the first clinically driven benchmark for evaluating LLMs in psychiatry. It covers core tasks such as diagnostic reasoning and clinical note comprehension, and we use it to systematically assess 16 state-of-the-art LLMs. PsychBench integrates a multi-dimensional evaluation framework: quantitative metrics, a reader study involving 60 psychiatrists of varying seniority, chain-of-thought interpretability analysis, input-length robustness testing, domain-adaptation (fine-tuning) efficacy validation, fine-grained clinical error attribution, and human-AI collaboration assessment. Results indicate that while current LLMs are not yet suitable for autonomous clinical decision-making, they demonstrably improve efficiency and diagnostic quality for junior clinicians. All datasets and evaluation code are publicly released to advance safe, clinically grounded AI deployment in psychiatry.
📝 Abstract
The advent of Large Language Models (LLMs) offers potential solutions to problems such as the shortage of medical resources and the low diagnostic consistency in psychiatric clinical practice. Despite this potential, a robust and comprehensive benchmarking framework for assessing the efficacy of LLMs in authentic psychiatric clinical environments has been absent, impeding the development of specialized LLMs tailored to psychiatric applications. To address this gap, we propose PsychBench, a benchmarking system grounded in the clinical demands of psychiatry and in clinical data, to evaluate the practical performance of LLMs in psychiatric clinical settings. We conducted a comprehensive quantitative evaluation of 16 LLMs using PsychBench and investigated the impact of prompt design, chain-of-thought reasoning, input text length, and domain-specific knowledge fine-tuning on model performance. Through detailed error analysis, we identified strengths and potential limitations of existing models and suggested directions for improvement. We then conducted a clinical reader study involving 60 psychiatrists of varying seniority to explore the practical benefits of existing LLMs as supportive tools. The quantitative evaluation and the reader study together show that, while existing models demonstrate significant potential, they are not yet adequate as decision-making tools in psychiatric clinical practice. The reader study further indicates that, as an auxiliary tool, an LLM can provide particularly notable support for junior psychiatrists, effectively enhancing their work efficiency and overall clinical quality. To promote research in this area, we will make the dataset and evaluation framework publicly available, with the hope of advancing the application of LLMs in psychiatric clinical settings.