🤖 AI Summary
This study addresses the open question of how to improve the accuracy of large language models (LLMs) in assessing posttraumatic stress disorder (PTSD) severity in zero-shot settings, and how that performance is influenced by contextual knowledge and modeling strategies. Leveraging clinical narratives and self-reported PTSD scores from 1,437 individuals, we systematically evaluate 11 state-of-the-art LLMs across diverse prompting strategies, structured subscale prediction, output rescaling, and nine ensemble methods. We uncover, for the first time, systematic patterns in how the type of contextual knowledge, reasoning intensity, model scale, and ensemble design affect PTSD assessment performance. Notably, open-weight models plateau beyond 70B parameters, whereas closed-source models continue improving across generations. The optimal configuration, which integrates construct definitions, narrative context, enhanced reasoning, and supervised–LLM fusion, significantly boosts assessment accuracy.
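As a rough illustration of the output-rescaling idea mentioned above, one simple variant linearly shifts and scales raw LLM severity estimates so that their mean and standard deviation match those of a target score distribution. This is a hypothetical sketch, not the study's actual procedure, and every number below is illustrative rather than taken from the data.

```python
from statistics import mean, stdev

def rescale(scores, target_mean, target_std):
    """Linearly map scores so their sample mean/std match the targets.

    Assumes at least two distinct scores (nonzero sample std).
    """
    m, s = mean(scores), stdev(scores)
    return [target_mean + (x - m) * target_std / s for x in scores]

# Hypothetical raw LLM severity estimates (illustrative values only)
raw = [20.0, 30.0, 40.0]

# Map them onto a hypothetical self-report distribution (mean 35, std 15)
adjusted = rescale(raw, target_mean=35.0, target_std=15.0)
```

A correction like this can fix systematic over- or under-estimation by an LLM without retraining, at the cost of assuming the target distribution is known in advance.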
📝 Abstract
Large language models (LLMs) are increasingly used in a zero-shot fashion to assess mental health conditions, yet little is known about what factors affect their accuracy. In this study, we use a clinical dataset of natural-language narratives and self-reported posttraumatic stress disorder (PTSD) severity scores from 1,437 individuals to comprehensively evaluate the performance of 11 state-of-the-art LLMs. To understand the factors affecting accuracy, we systematically varied (i) contextual knowledge, such as subscale definitions, distribution summaries, and interview questions, and (ii) modeling strategies, including zero-shot vs. few-shot prompting, reasoning effort, model size, structured subscale vs. direct scalar prediction, output rescaling, and nine ensemble methods. Our findings indicate that (a) LLMs are most accurate when provided with detailed construct definitions and the context of the narrative; (b) increased reasoning effort leads to better estimation accuracy; (c) the performance of open-weight models (Llama, DeepSeek) plateaus beyond 70B parameters, while closed-weight models (o3-mini, GPT-5) improve with newer generations; and (d) the best performance is achieved by ensembling a supervised model with the zero-shot LLMs. Taken together, the results suggest that the choice of contextual knowledge and modeling strategies is important for deploying LLMs to accurately assess mental health.
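The supervised–LLM ensembling behind finding (d) can be sketched, under the assumption that both models output scores on the same severity scale, as a simple weighted average of the two prediction sets. The function names and all scores below are hypothetical and do not reproduce the study's models or results.

```python
def fuse_predictions(supervised, llm, weight=0.5):
    """Weighted average of a supervised model's and an LLM's severity scores."""
    if len(supervised) != len(llm):
        raise ValueError("prediction lists must be the same length")
    return [weight * s + (1.0 - weight) * l for s, l in zip(supervised, llm)]

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example with made-up severity scores (not from the study)
y_true = [10.0, 40.0, 65.0]
supervised_pred = [14.0, 35.0, 60.0]   # hypothetical supervised regressor output
llm_pred = [8.0, 44.0, 70.0]           # hypothetical zero-shot LLM estimates

fused = fuse_predictions(supervised_pred, llm_pred, weight=0.5)
```

In this toy case the fused scores have lower mean absolute error than either source alone, which is the intuition behind combining a supervised model with zero-shot LLMs: their errors need not point in the same direction, so averaging can cancel part of each.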