Language Shapes Mental Health Evaluations in Large Language Models

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates systematic discrepancies in mental health assessments by large language models (LLMs) when prompted in Chinese versus English, and their downstream decision-making implications. By comparing GPT-4o and Qwen3 across multidimensional stigma scales—encompassing social, self, and professional domains—and a depression severity classification task, the work reveals for the first time that the choice of prompt language significantly alters model behavior: Chinese prompts elicit higher levels of stigmatizing attitudes, reduced sensitivity to stigma, and greater underestimation of depression severity. These findings challenge the prevailing assumption of cross-lingual consistency in LLMs and underscore the critical role of language selection in AI applications for mental health, where subtle linguistic differences can substantially influence clinical judgments and ethical outcomes.

📝 Abstract
This study investigates whether large language models (LLMs) exhibit cross-linguistic differences in mental health evaluations. Focusing on Chinese and English, we examine two widely used models, GPT-4o and Qwen3, to assess whether prompt language systematically shifts mental health-related evaluations and downstream decision outcomes. First, we assess models' evaluative orientation toward mental health stigma using multiple validated measurement scales capturing social stigma, self-stigma, and professional stigma. Across all measures, both models produce higher stigma-related responses when prompted in Chinese than in English. Second, we examine whether these differences also manifest in two common downstream decision tasks in mental health. In a binary mental health stigma detection task, sensitivity to stigmatizing content varies across language prompts, with lower sensitivity observed under Chinese prompts. In a depression severity classification task, predicted severity also differs by prompt language, with Chinese prompts associated with more underestimation errors, indicating a systematic downward shift in predicted severity relative to English prompts. Together, these findings suggest that language context can systematically shape evaluative patterns in LLM outputs and shift decision thresholds in downstream tasks.
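
As a rough illustration of the evaluation setup described above, the sketch below queries GPT-4o with the same Likert-style stigma item phrased in English and in Chinese and compares the resulting ratings. The item wording, prompt template, and response parsing are hypothetical placeholders, not the paper's actual scale materials; only the general idea of prompting the same model in two languages follows the study description.

```python
# Minimal sketch (assumptions): present one illustrative stigma-scale item to
# GPT-4o in English and Chinese and compare the numeric ratings. The item text,
# prompt wording, and parsing are placeholders, not the paper's materials.
from openai import OpenAI
import re

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPTS = {
    "en": "On a scale from 1 (strongly disagree) to 5 (strongly agree), "
          "rate this statement and reply with the number only: "
          "'People with depression are dangerous.'",
    # Chinese rendering of the same illustrative item
    "zh": "请用1（非常不同意）到5（非常同意）评分，只回复数字："
          "“抑郁症患者是危险的。”",
}

def rate(prompt: str) -> int | None:
    """Ask the model for a single Likert rating and parse the first digit 1-5."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    match = re.search(r"[1-5]", resp.choices[0].message.content)
    return int(match.group()) if match else None

ratings = {lang: rate(p) for lang, p in PROMPTS.items()}
print(ratings)  # a higher Chinese rating would mirror the reported stigma gap
```

In the study itself, such comparisons are aggregated across multiple validated scale items spanning social, self, and professional stigma, and across both GPT-4o and Qwen3, rather than drawn from a single item as in this toy example.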
Problem

Research questions and friction points this paper is trying to address.

large language models
mental health evaluation
cross-linguistic differences
stigma
language bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-linguistic bias
mental health stigma
large language models
prompt language effect
depression severity classification