A Systematic Evaluation of Large Language Models for PTSD Severity Estimation: The Role of Contextual Knowledge and Modeling Strategies

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the unresolved question of how to enhance the accuracy of large language models (LLMs) in assessing posttraumatic stress disorder (PTSD) severity under zero-shot settings, and how this performance is influenced by contextual knowledge and modeling strategies. Leveraging clinical narratives and self-reported PTSD scores from 1,437 individuals, we systematically evaluate 11 state-of-the-art LLMs across diverse prompting strategies, structured subscale prediction, output rescaling, and nine ensemble methods. We uncover, for the first time, systematic patterns in how contextual knowledge type, reasoning intensity, model scale, and ensemble design affect PTSD assessment performance. Notably, open-weight models plateau beyond 70B parameters, whereas closed-source models continue improving across generations. The optimal configuration—integrating construct definitions, narrative context, enhanced reasoning, and supervised–LLM fusion—significantly boosts assessment accuracy.

📝 Abstract
Large language models (LLMs) are increasingly used in a zero-shot fashion to assess mental health conditions, yet little is known about what factors affect their accuracy. In this study, we utilize a clinical dataset of natural language narratives and self-reported PTSD severity scores from 1,437 individuals to comprehensively evaluate the performance of 11 state-of-the-art LLMs. To understand the factors affecting accuracy, we systematically varied (i) contextual knowledge, such as subscale definitions, distribution summaries, and interview questions, and (ii) modeling strategies, including zero-shot vs. few-shot prompting, amount of reasoning effort, model size, structured subscale vs. direct scalar prediction, output rescaling, and nine ensemble methods. Our findings indicate that (a) LLMs are most accurate when provided with detailed construct definitions and the context of the narrative; (b) increased reasoning effort leads to better estimation accuracy; (c) the performance of open-weight models (Llama, DeepSeek) plateaus beyond 70B parameters, while closed-weight models (o3-mini, gpt-5) improve with newer generations; and (d) the best performance is achieved by ensembling a supervised model with the zero-shot LLMs. Taken together, the results suggest the choice of contextual knowledge and modeling strategies is important for deploying LLMs to accurately assess mental health.
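The supervised–LLM fusion and output rescaling the abstract describes can be illustrated with a minimal sketch. The function name, the 0.5 weight, and the 0–80 score range (typical of PCL-style scales) are assumptions for illustration; the paper's actual ensemble methods may differ.

```python
# Hypothetical sketch of supervised-LLM fusion, not the paper's exact method:
# blend a supervised model's prediction with zero-shot LLM severity estimates
# via a weighted average, then clip the result to the valid score range.

from statistics import mean

def fuse_predictions(supervised_score, llm_scores, llm_weight=0.5,
                     score_range=(0.0, 80.0)):
    """Blend one supervised prediction with several zero-shot LLM estimates.

    supervised_score: float from a trained regression model.
    llm_scores: list of zero-shot severity estimates from different LLMs.
    llm_weight: contribution of the mean-pooled LLM ensemble (assumed value).
    score_range: bounds of the severity scale (illustrative here).
    """
    llm_mean = mean(llm_scores)  # pool the LLM ensemble by simple averaging
    fused = (1 - llm_weight) * supervised_score + llm_weight * llm_mean
    lo, hi = score_range
    return min(max(fused, lo), hi)  # rescale/clip into the valid range

print(fuse_predictions(42.0, [50.0, 46.0, 48.0]))  # → 45.0
```

Equal weighting is only a starting point; in practice the fusion weight would be tuned on held-out data.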
Problem

Research questions and friction points this paper is trying to address.

PTSD severity estimation
large language models
contextual knowledge
modeling strategies
zero-shot evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
PTSD severity estimation
contextual knowledge
modeling strategies
ensemble methods
Panagiotis Kaliosis
Department of Computer Science, Stony Brook University, USA.
Adithya V Ganesan
Stony Brook University (Natural Language Processing, Computational Social Science).
O. Kjell
Department of Psychology, Lund University, Sweden.
Whitney R. Ringwald
Department of Psychology, University of Minnesota, USA.
Scott Feltman
Department of Applied Mathematics and Statistics, Stony Brook University, USA.
Melissa A. Carr
Stony Brook World Trade Center Wellness Program, Renaissance School of Medicine at Stony Brook University, USA.
Dimitris Samaras
Stony Brook University (Computer Vision, Machine Learning, Computer Graphics, Medical Imaging).
C. Ruggero
Department of Psychology, University of Texas at Dallas, USA.
Benjamin J. Luft
Stony Brook World Trade Center Wellness Program, Renaissance School of Medicine at Stony Brook University, USA.
Roman Kotov
Professor of Psychiatry, Stony Brook University (Psychiatric Classification, Personality, Longitudinal Studies of Mental Health).
Andrew H. Schwartz
Department of Computer Science, Stony Brook University, USA.