🤖 AI Summary
Deploying large language models (LLMs) for detecting positive thought disorder in schizophrenia spectrum disorders faces critical bottlenecks—privacy risks, high computational/financial costs, and opaque training data. Method: We propose a paradigm shift toward lightweight neural language models, quantifying speech disorganization via sliding-window perplexity on speech transcripts and leveraging compact Transformer architectures (e.g., DistilBERT, ALBERT). Contribution/Results: We identify a model-size inflection point: smaller models exhibit superior sensitivity to thought disorder and stronger generalization. On audio diary and clinical interview data, our approach achieves an AUC of 0.89—significantly outperforming LLaMA-3 and GPT-4—while reducing inference cost by 90%, enabling on-device deployment. This work provides the first empirical challenge to the “bigger is better” assumption in clinical NLP, establishing the clinical feasibility and distinct advantages of compact models for mental health AI.
📝 Abstract
Disorganized thinking is a key diagnostic indicator of schizophrenia-spectrum disorders. Recently, clinical estimates of the severity of disorganized thinking have been shown to correlate with measures of how difficult speech transcripts would be for large language models (LLMs) to predict. However, LLMs' deployment challenges, including privacy concerns, computational and financial costs, and lack of transparency about training data, limit their clinical utility. We investigate whether smaller neural language models can serve as effective alternatives for detecting positive formal thought disorder, using the same sliding-window perplexity measurements that proved effective with larger models. Surprisingly, our results show that smaller models are more sensitive to the linguistic differences associated with formal thought disorder than their larger counterparts. Detection capability declines beyond a certain model size and context length, challenging the common assumption that "bigger is better" for LLM-based applications. Our findings generalize across audio diaries and clinical interview speech samples from individuals with psychotic symptoms, suggesting a promising direction for developing efficient, cost-effective, and privacy-preserving screening tools that can be deployed in both clinical and naturalistic settings.
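The sliding-window perplexity measure at the heart of this approach can be illustrated with a minimal sketch. The paper does not publish its implementation; the snippet below only shows the generic mechanics: given per-token negative log-likelihoods from any causal language model, compute perplexity over each fixed-length window and aggregate across windows. The `window` and `stride` parameters and the mean aggregation are illustrative assumptions, not the authors' exact configuration.

```python
import math

def sliding_window_perplexity(nll, window=32, stride=1):
    """Mean perplexity over sliding windows of per-token NLLs.

    nll    : per-token negative log-likelihoods (in nats), as produced
             by any causal language model scoring a transcript.
    window : tokens per window (illustrative value, not the paper's).
    stride : step between consecutive window starts.
    """
    if len(nll) < window:
        raise ValueError("transcript shorter than one window")
    ppls = []
    for start in range(0, len(nll) - window + 1, stride):
        chunk = nll[start:start + window]
        # Perplexity of a window = exp(mean NLL over its tokens).
        ppls.append(math.exp(sum(chunk) / window))
    # Aggregate window perplexities into one per-transcript score.
    return sum(ppls) / len(ppls)
```

In practice the per-token NLLs would come from a compact model such as a distilled Transformer, and the resulting per-transcript score would feed a downstream classifier; higher scores indicate speech the model finds harder to predict.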