Voices, Faces, and Feelings: Multi-modal Emotion-Cognition Captioning for Mental Health Understanding

📅 2026-03-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing approaches to multimodal mental health assessment, which often reduce the problem to coarse-grained classification tasks and fail to capture fine-grained emotional and cognitive cues or provide interpretability. To overcome these shortcomings, we introduce a novel task, Emotion-Cognition Multimodal Captioning (ECMC), that generates natural language descriptions explicitly linking multimodal signals to psychological states through emotion-cognition profiles. Our method employs modality-specific encoders, a Q-former-based dual-stream BridgeNet fusion module, contrastive learning for enhanced cross-modal alignment, and a LLaMA decoder to produce semantically coherent descriptions. Experimental results demonstrate that our approach outperforms current models on both objective metrics and human evaluations, significantly improving diagnostic accuracy and interpretability in mental health assessment.
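The dual-stream fusion described above can be sketched as learnable query tokens cross-attending to concatenated modality features, in the spirit of a Q-former. This is a minimal NumPy illustration, not the paper's actual BridgeNet: the query counts, feature dimensions, single-head attention, and stream names are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, features, d_k):
    # queries: (n_q, d) learnable tokens; features: (n_f, d) modality features.
    scores = queries @ features.T / np.sqrt(d_k)   # (n_q, n_f)
    weights = softmax(scores, axis=-1)             # each query attends over all features
    return weights @ features                      # (n_q, d) pooled tokens

rng = np.random.default_rng(0)
d = 64                                             # assumed embedding dimension
emotion_queries = rng.standard_normal((8, d))      # hypothetical emotion-stream queries
cognition_queries = rng.standard_normal((8, d))    # hypothetical cognition-stream queries
audio_feats = rng.standard_normal((50, d))         # stand-in per-frame audio features
video_feats = rng.standard_normal((30, d))         # stand-in per-frame video features

# Both streams attend to the same pooled multi-modal feature sequence.
fused = np.concatenate([audio_feats, video_feats], axis=0)
emotion_tokens = cross_attention(emotion_queries, fused, d)
cognition_tokens = cross_attention(cognition_queries, fused, d)
print(emotion_tokens.shape, cognition_tokens.shape)  # (8, 64) (8, 64)
```

The two fixed-size token sets would then be projected into the decoder's embedding space and prepended to the caption prompt, so the language model conditions on compact emotion and cognition summaries rather than raw frame-level features.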


๐Ÿ“ Abstract
Emotional and cognitive factors are essential for understanding mental health disorders. However, existing methods often reduce multi-modal data analysis to classification tasks, limiting interpretability, especially for emotion and cognition. Although large language models (LLMs) offer opportunities for mental health analysis, they mainly rely on textual semantics and overlook fine-grained emotional and cognitive cues in multi-modal inputs. While some studies incorporate emotional features via transfer learning, their connection to mental health conditions remains implicit. To address these issues, we propose ECMC, a novel task that generates natural language descriptions of emotional and cognitive states from multi-modal data and produces emotion-cognition profiles that improve both the accuracy and interpretability of mental health assessments. We adopt an encoder-decoder architecture in which modality-specific encoders extract features that are fused by a dual-stream BridgeNet based on Q-former. Contrastive learning enhances the extraction of emotional and cognitive features. A LLaMA decoder then aligns these features with annotated captions to produce detailed descriptions. Extensive objective and subjective evaluations demonstrate that: 1) ECMC outperforms existing multi-modal LLMs and mental health models in generating emotion-cognition captions; 2) the generated emotion-cognition profiles significantly improve assistive diagnosis and interpretability in mental health analysis.
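The contrastive alignment mentioned in the abstract is commonly implemented as an InfoNCE-style loss, where matched cross-modal pairs are pulled together and all other pairs in the batch act as negatives. The sketch below is a generic NumPy version of that idea, not the paper's exact objective; the temperature value and pairing scheme are assumptions.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.07):
    # anchors[i] and positives[i] are matched embeddings (e.g. the emotional
    # representation of the same clip from two modalities); off-diagonal
    # pairs in the batch serve as negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # matched pairs sit on the diagonal

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))
loss_aligned = info_nce(x, x)                         # perfectly matched pairs
loss_random = info_nce(x, rng.standard_normal((16, 32)))
print(loss_aligned, loss_random)  # aligned embeddings yield the smaller loss
```

Minimizing this loss encourages the emotion and cognition features from different modalities to agree before they are handed to the decoder, which is what makes the generated captions traceable back to the underlying signals.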
Problem

Research questions and friction points this paper is trying to address.

multi-modal
emotion-cognition
mental health
interpretability
captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-modal emotion-cognition captioning
BridgeNet
contrastive learning
LLaMA-based decoder
interpretable mental health assessment
Zhiyuan Zhou
PhD student, UC Berkeley
Robotics, Reinforcement Learning
Yanrong Guo
Hefei University of Technology
Shijie Hao
Hefei University of Technology