ProMind-LLM: Proactive Mental Health Care via Causal Reasoning with Sensor Data

📅 2025-05-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current mental health risk assessment relies heavily on subjective textual records, which can be distorted by psychological uncertainty, making predictions inconsistent and unreliable. To address this, the authors propose a proactive assessment framework that fuses objective behavioral data with subjective text. The method combines domain-adapted large language model (LLM) pretraining, a self-refinement module for numerical behavioral data, and a causal chain-of-thought (Causal CoT) reasoning mechanism, enabling joint behavior-text modeling and causally interpretable inference. Evaluated on the real-world datasets PMData and Globem, the approach significantly outperforms general-purpose LLMs: prediction consistency improves by 23.6%, interpretability is enhanced, and clinical deployability is demonstrated. The authors present this as the first mental health risk assessment paradigm to simultaneously ensure objectivity, causal transparency, and practical deployability.
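The causal chain-of-thought mechanism described above can be pictured as a structured prompt that forces the model to reason from behavioral evidence through causal factors before committing to a risk label. The template and step wording below are illustrative assumptions, not the paper's actual prompt:

```python
# Hypothetical sketch of a Causal CoT prompt; the step names and template
# are assumptions for illustration, not ProMind-LLM's implementation.

CAUSAL_COT_STEPS = (
    "1. Summarize the objective behavioral evidence (sleep, activity, heart rate).",
    "2. Identify candidate causal factors linking behavior changes to mental state.",
    "3. Check each factor against the subjective self-report for consistency.",
    "4. Conclude with a risk level (low/medium/high) justified by the causal chain.",
)

def build_causal_cot_prompt(behavior_summary: str, self_report: str) -> str:
    """Combine objective behavior data and subjective text into one causal CoT query."""
    steps = "\n".join(CAUSAL_COT_STEPS)
    return (
        "You are a mental health assessment assistant.\n\n"
        f"Objective behavior data:\n{behavior_summary}\n\n"
        f"Subjective self-report:\n{self_report}\n\n"
        "Reason step by step along the causal chain:\n"
        f"{steps}\n"
        "Answer:"
    )

prompt = build_causal_cot_prompt(
    "Average sleep fell from 7.2h to 5.1h over two weeks; step count down 40%.",
    "I feel fine, just a bit busy lately.",
)
```

The point of the explicit step list is that the model's answer exposes its causal chain, which is what makes the final risk label auditable rather than a black-box score.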

📝 Abstract
Mental health risk is a critical global public health challenge, necessitating innovative and reliable assessment methods. With the development of large language models (LLMs), they stand out as a promising tool for explainable mental health care applications. Nevertheless, existing approaches predominantly rely on subjective textual mental records, which can be distorted by inherent mental uncertainties, leading to inconsistent and unreliable predictions. To address these limitations, this paper introduces ProMind-LLM. We investigate an innovative approach integrating objective behavior data as complementary information alongside subjective mental records for robust mental health risk assessment. Specifically, ProMind-LLM incorporates a comprehensive pipeline that includes domain-specific pretraining to tailor the LLM for mental health contexts, a self-refine mechanism to optimize the processing of numerical behavioral data, and causal chain-of-thought reasoning to enhance the reliability and interpretability of its predictions. Evaluations on two real-world datasets, PMData and Globem, demonstrate the effectiveness of our proposed methods, achieving substantial improvements over general LLMs. We anticipate that ProMind-LLM will pave the way for more dependable, interpretable, and scalable mental health care solutions.
Problem

Research questions and friction points this paper is trying to address.

Integrating objective behavior data with subjective mental records for reliable mental health assessment
Addressing inconsistent predictions caused by inherent mental uncertainties in existing methods
Enhancing reliability and interpretability of mental health risk predictions using causal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-specific pretraining for mental health contexts
Self-refine mechanism for numerical behavioral data
Causal chain-of-thought reasoning for reliable predictions
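Of the three contributions above, the self-refine mechanism for numerical data is the most algorithmic: raw sensor series are verbalized, then the model critiques and revises its own summary. A minimal sketch of such a loop, assuming a generic `llm(prompt) -> str` callable and an illustrative critique prompt (both assumptions, not the paper's exact design):

```python
# Minimal self-refine loop for numerical behavioral data. The `llm` callable,
# critique prompt, and "OK" stop condition are hypothetical stand-ins for
# whatever ProMind-LLM actually uses.

from typing import Callable

def self_refine_summary(values: list[float], llm: Callable[[str], str],
                        max_rounds: int = 3) -> str:
    """Verbalize a numeric series, then let the model critique and revise
    its own summary for a bounded number of rounds."""
    series = ", ".join(f"{v:.1f}" for v in values)
    summary = llm(f"Summarize this daily sleep-hours series: {series}")
    for _ in range(max_rounds):
        critique = llm(f"Series: {series}\nSummary: {summary}\n"
                       "List any numerical errors or omissions, or say OK.")
        if critique.strip().upper().startswith("OK"):
            break  # the model accepts its own summary; stop refining
        summary = llm(f"Series: {series}\nSummary: {summary}\n"
                      f"Critique: {critique}\nWrite a corrected summary.")
    return summary
```

Bounding the loop with `max_rounds` matters in practice: self-critique can oscillate, so the refinement must terminate even when the model never says "OK".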