🤖 AI Summary
Multimodal depression detection faces significant challenges due to modality inconsistency, interference from irrelevant information, and inter-individual variability, which hinder effective fusion. To address these issues, this work proposes the IDRL framework, which uniquely integrates modality alignment and individual differences within a unified model. Specifically, it disentangles multimodal representations into three distinct subspaces: a shared depression space, modality-specific depression spaces, and an irrelevant space. Furthermore, an Individual-Aware Fusion (IAF) module is introduced to dynamically adjust modality weights according to individual characteristics, enabling adaptive, person-specific fusion. This approach substantially enhances the extraction of depression-relevant signals and improves fusion robustness, achieving consistently superior and stable performance over existing methods in multimodal depression detection tasks.
📝 Abstract
Depression is a severe mental disorder, and its reliable identification plays a critical role in early intervention and treatment. Multimodal depression detection aims to improve diagnostic performance by jointly modeling complementary information from multiple modalities. Recently, numerous multimodal learning approaches have been proposed for depression analysis; however, these methods suffer from the following limitations: 1) inter-modal inconsistency and depression-unrelated interference, where depression-related cues may conflict across modalities while substantial irrelevant content obscures critical depressive signals, and 2) diverse individual depressive presentations, which lead to individual differences in the importance of modalities and cues and hinder reliable fusion. To address these issues, we propose the Individual-aware Multimodal Depression-related Representation Learning framework (IDRL) for robust depression diagnosis. Specifically, IDRL 1) disentangles multimodal representations into a modality-common depression space, a modality-specific depression space, and a depression-unrelated space, enhancing modality alignment while suppressing irrelevant information, and 2) introduces an individual-aware modality-fusion module (IAF) that dynamically adjusts the weights of the disentangled depression-related features according to their predictive significance, thereby achieving adaptive cross-modal fusion for different individuals. Extensive experiments demonstrate that IDRL achieves superior and robust performance on multimodal depression detection.
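To make the fusion idea concrete, the core of an individual-aware fusion step, weighting each modality's depression-related features by a per-individual significance score and combining them with softmax-normalized weights, can be sketched in plain Python. This is a minimal illustration under assumed names and dimensions (a 4-d feature vector per modality, a dot-product scoring function), not the authors' IAF implementation.

```python
import math
import random

random.seed(0)

def softmax(scores):
    # Numerically stable softmax over a list of scalar scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def individual_aware_fusion(features, score_w):
    # features: {modality: disentangled depression-related vector}
    # score_w:  {modality: scoring vector estimating that modality's
    #            predictive significance for this individual (illustrative)}
    names = list(features)
    # One scalar score per modality via a dot product with its scoring vector.
    scores = [sum(a * b for a, b in zip(score_w[n], features[n])) for n in names]
    weights = softmax(scores)  # per-individual modality weights, sum to 1
    dim = len(next(iter(features.values())))
    # Weighted sum of modality features -> fused representation.
    fused = [sum(w * features[n][i] for w, n in zip(weights, names))
             for i in range(dim)]
    return fused, dict(zip(names, weights))

# Toy example: audio/video/text features assumed already disentangled
# into their depression-related components (hypothetical 4-d vectors).
feats = {m: [random.gauss(0, 1) for _ in range(4)] for m in ("audio", "video", "text")}
score_w = {m: [random.gauss(0, 1) for _ in range(4)] for m in feats}
fused, weights = individual_aware_fusion(feats, score_w)
print(weights)
```

Because the weights come from a softmax over per-individual scores, two individuals with different feature profiles receive different modality weightings, which is the adaptive, person-specific behavior the abstract attributes to IAF.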