🤖 AI Summary
Current automated depression diagnosis faces two critical bottlenecks: (1) diagnostic decisions lack interpretability, undermining clinical trustworthiness; and (2) existing multimodal large language models (MLLMs) are not tailored to clinical interview data, limiting cross-modal reasoning performance. To address these, we propose MLlm-DR, a novel dual-interpretable framework that integrates a compact multimodal LLM with a lightweight query module (LQ-former), jointly generating depression severity scores and explicitly modeling the diagnostic reasoning pathway. The framework incorporates dedicated speech, linguistic, and visual feature extractors and is fine-tuned end to end on a high-quality, clinically grounded multimodal dataset derived from real-world psychiatric interviews. Evaluated on the CMDC and E-DAIC-WOZ benchmarks, MLlm-DR achieves state-of-the-art accuracy, improving by 3.2% and 4.7% respectively, while significantly enhancing diagnostic transparency and clinical applicability.
📝 Abstract
Automated depression diagnosis aims to analyze multimodal information from interview videos to predict participants' depression scores. Previous studies often lack clear explanations of how these scores are determined, limiting their adoption in clinical practice. While the advent of LLMs offers a possible pathway toward explainable depression diagnosis, current LLMs capable of processing multimodal data are not trained on interview data, resulting in poor diagnostic performance when used directly. In this paper, we propose a novel multimodal large language model (MLlm-DR) that understands multimodal inputs and supports explainable depression diagnosis. MLlm-DR integrates a smaller LLM with a lightweight query module (LQ-former). Specifically, the smaller LLM generates depression scores and the corresponding evaluation rationales. To enhance its logical reasoning on this domain-specific task while maintaining practicality, we constructed a robust training dataset to fine-tune it. Meanwhile, the LQ-former captures depression-related features from speech and visual data, strengthening the model's ability to process multimodal information and enabling comprehensive depression diagnosis. Our approach achieves state-of-the-art results on two interview-based benchmark datasets, CMDC and E-DAIC-WOZ, demonstrating its effectiveness and superiority.
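To make the described pipeline concrete, here is a minimal NumPy sketch of how a lightweight query module of this kind typically works: a small set of learned query tokens cross-attends to frame-level speech or visual features and compresses them into a fixed number of summary tokens that can be fed to the LLM alongside the transcript. All names, dimensions, and weights below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lq_former(features, queries, w_k, w_v):
    """Toy 'lightweight query module': learned queries cross-attend to
    modality features and return a fixed number of summary tokens."""
    keys = features @ w_k      # (T, d)
    values = features @ w_v    # (T, d)
    # Scaled dot-product cross-attention: queries attend over all T frames.
    attn = softmax(queries @ keys.T / np.sqrt(keys.shape[-1]), axis=-1)  # (Q, T)
    return attn @ values       # (Q, d) summary tokens for the LLM

rng = np.random.default_rng(0)
d = 16
speech_feats = rng.normal(size=(40, d))   # e.g. frame-level speech features
visual_feats = rng.normal(size=(40, d))   # e.g. per-frame facial features
queries = rng.normal(size=(4, d))         # 4 learned query tokens (assumed)
w_k, w_v = rng.normal(size=(d, d)), rng.normal(size=(d, d))

speech_tokens = lq_former(speech_feats, queries, w_k, w_v)
visual_tokens = lq_former(visual_feats, queries, w_k, w_v)

# In the framework described above, summary tokens like these would be
# combined with the interview text and passed to the fine-tuned smaller LLM,
# which then generates a depression score plus a rationale.
llm_input = np.concatenate([speech_tokens, visual_tokens], axis=0)
print(llm_input.shape)  # (8, 16): 4 summary tokens per modality
```

The key design point this sketch illustrates is that the query module keeps the LLM's input length fixed regardless of interview duration, which is what makes pairing long multimodal recordings with a compact LLM practical.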