MLlm-DR: Towards Explainable Depression Recognition with MultiModal Large Language Models

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current automated depression diagnosis faces two critical bottlenecks: (1) diagnostic decisions lack interpretability, undermining clinical trustworthiness; and (2) existing multimodal large language models (MLLMs) are not tailored to clinical interview data, limiting cross-modal reasoning performance. To address these, we propose MLlm-DR—a novel dual-interpretable framework that integrates a compact multimodal LLM with a lightweight query module (LQ-former), jointly generating depression severity scores and explicitly modeling the diagnostic reasoning pathway. The framework incorporates dedicated speech, linguistic, and visual feature extractors and is end-to-end fine-tuned on a high-quality, clinically grounded multimodal dataset derived from real-world psychiatric interviews. Evaluated on CMDC and E-DAIC-WOZ benchmarks, MLlm-DR achieves state-of-the-art accuracy—improving by 3.2% and 4.7%, respectively—while significantly enhancing diagnostic transparency and clinical applicability.

📝 Abstract
Automated depression diagnosis aims to analyze multimodal information from interview videos to predict participants' depression scores. Previous studies often lack clear explanations of how these scores were determined, limiting their adoption in clinical practice. While the advent of LLMs offers a possible pathway to explainable depression diagnosis, current LLMs capable of processing multimodal data lack training on interview data, resulting in poor diagnostic performance when used directly. In this paper, we propose a novel multimodal large language model (MLlm-DR) that can understand multimodal inputs and supports explainable depression diagnosis. MLlm-DR integrates a smaller LLM with a lightweight query module (LQ-former). Specifically, the smaller LLM is designed to generate depression scores and the corresponding evaluation rationales. To enhance its logical reasoning on domain-specific tasks while maintaining practicality, we constructed a robust training dataset to fine-tune it. Meanwhile, the LQ-former captures depression-related features from speech and visual data, strengthening the model's ability to process multimodal information and enabling comprehensive depression diagnosis. Our approach achieves state-of-the-art results on two interview-based benchmark datasets, CMDC and E-DAIC-WOZ, demonstrating its effectiveness and superiority.
Problem

Research questions and friction points this paper is trying to address.

Develop explainable depression diagnosis using multimodal data
Address current LLMs' lack of training on interview data
Enhance multimodal feature capture for accurate depression recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal LLM integrates speech and visual features
Lightweight query module enhances logical reasoning
Robust dataset fine-tunes explainable depression diagnosis
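The paper does not publish the LQ-former's internals, but its stated role (condensing variable-length speech and visual features into a fixed set of tokens for a smaller LLM) matches a standard learnable-query cross-attention design. Below is a minimal NumPy sketch under that assumption; the function name `lq_former_step` and all shapes are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lq_former_step(queries, features):
    """One cross-attention step: learnable query tokens attend over
    per-frame modality features and return a fixed-size summary."""
    d = queries.shape[-1]
    scores = queries @ features.T / np.sqrt(d)  # (num_queries, num_frames)
    attn = softmax(scores, axis=-1)             # rows sum to 1
    return attn @ features                      # (num_queries, d)

rng = np.random.default_rng(0)
num_queries, d = 8, 64
queries = rng.standard_normal((num_queries, d))   # learnable query tokens
speech_feats = rng.standard_normal((120, d))      # e.g. 120 speech frames
visual_feats = rng.standard_normal((90, d))       # e.g. 90 video frames

# Each modality is condensed to the same fixed number of tokens,
# which could then be projected into the LLM's embedding space.
speech_tokens = lq_former_step(queries, speech_feats)
visual_tokens = lq_former_step(queries, visual_feats)
print(speech_tokens.shape, visual_tokens.shape)  # (8, 64) (8, 64)
```

The key property this illustrates is that the LLM sees a constant-size multimodal prefix regardless of interview length, which is what makes attaching such a module to a compact LLM practical.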
👥 Authors
Wei Zhang, National University of Defense Technology, China
Juan Chen, School of Psychology, South China Normal University, China
En Zhu, National University of Defense Technology, China
Wenhong Cheng, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, China
YunPeng Li, Nanjing Industria Tenebris Information Technology Co., Ltd, China
Yanbo J. Wang, National University of Uzbekistan named after Mirzo Ulugbek, Uzbekistan