🤖 AI Summary
Current mental health risk assessment relies heavily on subjective textual records, whose inherent psychological uncertainty renders predictions inconsistent and unreliable. To address this, we propose an active assessment framework that fuses objective behavioral data with subjective text. Our method introduces domain-adapted large language model (LLM) pretraining, a self-refinement module for numerical behavioral data, and a causal chain-of-thought (Causal CoT) reasoning mechanism, enabling joint behavior-text multimodal modeling and causally interpretable inference. Evaluated on the real-world datasets PMData and Globem, our approach significantly outperforms general-purpose LLMs: prediction consistency improves by 23.6%, interpretability is enhanced, and clinical deployability is demonstrated. This work establishes the first paradigm for mental health risk assessment that simultaneously ensures objectivity, causal transparency, and practical deployability.
📝 Abstract
Mental health risk is a critical global public health challenge, necessitating innovative and reliable assessment methods. With the rapid development of large language models (LLMs), they have emerged as a promising tool for explainable mental health care applications. Nevertheless, existing approaches predominantly rely on subjective textual mental records, which can be distorted by inherent mental uncertainties, leading to inconsistent and unreliable predictions. To address these limitations, this paper introduces ProMind-LLM. We investigate an innovative approach that integrates objective behavior data as complementary information alongside subjective mental records for robust mental health risk assessment. Specifically, ProMind-LLM incorporates a comprehensive pipeline that includes domain-specific pretraining to tailor the LLM for mental health contexts, a self-refine mechanism to optimize the processing of numerical behavioral data, and causal chain-of-thought reasoning to enhance the reliability and interpretability of its predictions. Evaluations on two real-world datasets, PMData and Globem, demonstrate the effectiveness of our proposed methods, achieving substantial improvements over general LLMs. We anticipate that ProMind-LLM will pave the way for more dependable, interpretable, and scalable mental health care solutions.
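To make the pipeline described above more concrete, here is a minimal, purely illustrative sketch of how the three stages might fit together: numerical self-refinement of behavioral data, fusion with the subjective record, and a causal chain-of-thought prompt. The paper's actual implementation is not reproduced here; every function name, field name, and prompt wording below is a hypothetical stand-in.

```python
# Illustrative sketch only: ProMind-LLM's real pipeline is not public here.
# All names are hypothetical, chosen to mirror the three stages in the
# abstract: (1) self-refinement of numerical behavioral data,
# (2) behavior-text fusion, (3) causal chain-of-thought prompting.

def self_refine_behavior(raw: dict) -> dict:
    """Summarize raw numeric streams into compact, LLM-friendly statistics."""
    refined = {}
    for name, values in raw.items():
        mean = sum(values) / len(values)
        refined[name] = {"mean": round(mean, 2),
                         "min": min(values),
                         "max": max(values)}
    return refined

def build_causal_cot_prompt(behavior: dict, mental_record: str) -> str:
    """Fuse objective behavior summaries with the subjective record and
    ask for step-by-step causal reasoning before a risk judgment."""
    behavior_lines = "\n".join(
        f"- {k}: mean={v['mean']}, range=[{v['min']}, {v['max']}]"
        for k, v in behavior.items()
    )
    return (
        "Objective behavioral data (past week):\n"
        f"{behavior_lines}\n\n"
        f"Subjective self-report:\n{mental_record}\n\n"
        "Reason step by step: first identify behavioral anomalies, then "
        "link each anomaly causally to the self-report, and only then "
        "output a mental health risk level (low/medium/high) with "
        "justification."
    )

# Toy inputs standing in for wearable data (e.g., PMData-style signals).
raw = {"sleep_hours": [7.5, 6.0, 4.5, 5.0, 4.0, 6.5, 5.5],
       "steps": [9000, 8200, 3100, 2800, 2500, 4000, 3600]}
prompt = build_causal_cot_prompt(self_refine_behavior(raw),
                                 "I feel tired and unmotivated lately.")
print(prompt)
```

The prompt string produced here would then be sent to the domain-pretrained LLM; the self-refinement step exists because raw high-frequency numeric streams are a poor fit for token-based models.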