PsychBench: A comprehensive and professional benchmark for evaluating the performance of LLM-assisted psychiatric clinical practice

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack robust, real-world psychiatric clinical evaluation benchmarks, hindering the development of domain-specialized models. To address this, we introduce PsychBench—the first clinically driven benchmark for evaluating LLMs in psychiatry—covering core tasks including diagnostic reasoning and clinical note comprehension, and systematically assessing 16 state-of-the-art LLMs. Our method introduces a multi-dimensional evaluation framework integrating quantitative metrics, a hierarchical reader study involving 60 board-certified psychiatrists, chain-of-thought interpretability analysis, input-length robustness testing, domain-adaptation efficacy validation, and fine-grained clinical error attribution alongside human-AI collaboration assessment. Results indicate that while current LLMs are not yet suitable for autonomous clinical decision-making, they demonstrably enhance efficiency and diagnostic quality for junior clinicians. All datasets and evaluation code are publicly released to advance safe, clinically grounded AI deployment in psychiatry.

📝 Abstract
The advent of Large Language Models (LLMs) offers potential solutions to problems such as the shortage of medical resources and low diagnostic consistency in psychiatric clinical practice. Despite this potential, a robust and comprehensive benchmarking framework for assessing the efficacy of LLMs in authentic psychiatric clinical environments is absent, which has impeded the advancement of specialized LLMs tailored to psychiatric applications. To address this gap, we propose PsychBench, a benchmarking system built on the clinical demands of psychiatry and real clinical data, to evaluate the practical performance of LLMs in psychiatric clinical settings. We conducted a comprehensive quantitative evaluation of 16 LLMs using PsychBench and investigated the impact of prompt design, chain-of-thought reasoning, input text length, and domain-specific knowledge fine-tuning on model performance. Through detailed error analysis, we identified strengths and limitations of the existing models and suggested directions for improvement. We then conducted a clinical reader study involving 60 psychiatrists of varying seniority to explore the practical benefits of existing LLMs as supportive tools. Through the quantitative and reader evaluations, we show that while existing models demonstrate significant potential, they are not yet adequate as decision-making tools in psychiatric clinical practice. The reader study further indicates that, as an auxiliary tool, LLMs can provide particularly notable support for junior psychiatrists, effectively enhancing their work efficiency and overall clinical quality. To promote research in this area, we will make the dataset and evaluation framework publicly available, with the hope of advancing the application of LLMs in psychiatric clinical settings.
Problem

Research questions and friction points this paper is trying to address.

Lack of robust benchmarking for LLMs in psychiatric practice.
Need to evaluate LLM performance in clinical psychiatry settings.
Assessing LLMs as supportive tools for psychiatrists of varying seniority.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed PsychBench for LLM psychiatric evaluation.
Assessed 16 LLMs with clinical data integration.
Conducted reader study with 60 psychiatrists.
Ruoxi Wang
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
Shuyu Liu
Professor in wheat breeding and genetics, Texas A&M University
Ling Zhang
Alibaba DAMO Academy USA
Xuequan Zhu
Beijing Key Laboratory of Mental Disorders, National Clinical Research Center for Mental Disorders, National Center for Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.; Advanced Innovation Center for Human Brain Protection, Capital Medical University, Beijing, 100088, China.
Rui Yang
Beijing Key Laboratory of Mental Disorders, National Clinical Research Center for Mental Disorders, National Center for Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.
Xinzhu Zhou
Beijing Key Laboratory of Mental Disorders, National Clinical Research Center for Mental Disorders, National Center for Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.
Fei Wu
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
Zhi Yang
Beijing Key Laboratory of Mental Disorders, National Clinical Research Center for Mental Disorders, National Center for Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.
Cheng Jin
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.; Beijing Key Laboratory of Mental Disorders, National Clinical Research Center for Mental Disorders, National Center for Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
Gang Wang
Beijing Key Laboratory of Mental Disorders, National Clinical Research Center for Mental Disorders, National Center for Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.