🤖 AI Summary
Automatic depression detection faces challenges including unsystematic feature representation, unclear multimodal fusion strategies, and inconsistent evaluation protocols. To address these, this work conducts the first systematic comparative study of speech, text, and electroencephalography (EEG) modalities, evaluating pretrained embeddings versus handcrafted features, diverse neural encoders, and attention-based fusion mechanisms. We propose an end-to-end framework integrating pretrained modality-specific embeddings with cross-modal attention fusion, explicitly elucidating EEG’s critical role in multimodal synergy. Additionally, we establish a reproducible, standardized benchmark for depression detection. Experiments demonstrate that trimodal fusion significantly improves performance over unimodal and bimodal baselines, that pretrained embeddings consistently outperform handcrafted features, and that our method achieves state-of-the-art results on mainstream datasets. These findings empirically validate the efficacy of cross-modal complementarity in depression recognition.
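The summary above describes cross-modal attention fusion over pretrained modality-specific embeddings. The following is a minimal illustrative sketch of that general idea, not the paper's actual architecture: the embedding dimensions, projection layers, and pooling choice are all assumptions for demonstration.

```python
# Hypothetical sketch of cross-modal attention fusion. EEG, speech, and
# text embeddings (dimensions here are illustrative assumptions, e.g.
# 768-d speech/text vectors from pretrained encoders) are projected into
# a shared space; each modality token then attends over all three.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, eeg_dim=128, speech_dim=768, text_dim=768,
                 d_model=256, n_heads=4):
        super().__init__()
        self.proj = nn.ModuleDict({
            "eeg": nn.Linear(eeg_dim, d_model),
            "speech": nn.Linear(speech_dim, d_model),
            "text": nn.Linear(text_dim, d_model),
        })
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, 2)  # depressed vs. control

    def forward(self, eeg, speech, text):
        # Project each modality embedding to the shared dimension,
        # giving a sequence of three modality tokens per sample.
        tokens = torch.stack(
            [self.proj["eeg"](eeg),
             self.proj["speech"](speech),
             self.proj["text"](text)],
            dim=1,
        )  # (batch, 3, d_model)
        # Cross-modal attention: every modality token attends to all three.
        fused, _ = self.attn(tokens, tokens, tokens)
        # Mean-pool the fused tokens and classify.
        return self.classifier(fused.mean(dim=1))

model = CrossModalFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 768), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```

A late-fusion baseline would instead concatenate the three projected embeddings; the attention variant lets the model weight modalities per sample, which is one way the complementary role of EEG could surface.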
📝 Abstract
Depression is a widespread mental health disorder, yet its automatic detection remains challenging. Prior work has explored unimodal and multimodal approaches, with multimodal systems showing promise by leveraging complementary signals. However, existing studies are limited in scope, lack systematic comparisons of features, and suffer from inconsistent evaluation protocols. We address these gaps by systematically exploring feature representations and modelling strategies across the EEG, speech, and text modalities. We evaluate handcrafted features versus pretrained embeddings, assess the effectiveness of different neural encoders, compare unimodal, bimodal, and trimodal configurations, and analyse fusion strategies with particular attention to the role of EEG. Consistent subject-independent splits are applied to ensure robust, reproducible benchmarking. Our results show that (i) combining the EEG, speech, and text modalities enhances multimodal detection, (ii) pretrained embeddings outperform handcrafted features, and (iii) carefully designed trimodal models achieve state-of-the-art performance. Our work lays the groundwork for future research in multimodal depression detection.
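The abstract stresses consistent subject-independent splits. The sketch below illustrates what that protocol typically means in practice, assuming scikit-learn's `GroupKFold` with synthetic subject IDs; the paper's actual split procedure may differ.

```python
# Hypothetical sketch of subject-independent evaluation: GroupKFold keeps
# all samples from one subject in the same fold, so no subject appears in
# both the training and test sets. Data and subject IDs are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.random.randn(12, 8)             # 12 samples, 8 features (illustrative)
y = np.array([0, 1] * 6)               # binary labels: depressed vs. control
subjects = np.repeat([0, 1, 2, 3], 3)  # 4 subjects, 3 samples each

gkf = GroupKFold(n_splits=4)
for train_idx, test_idx in gkf.split(X, y, groups=subjects):
    train_subj = set(subjects[train_idx])
    test_subj = set(subjects[test_idx])
    # Subject-independence guarantee: no subject leaks across the split.
    assert train_subj.isdisjoint(test_subj)
```

A plain random split would let the same subject's recordings land in both partitions, inflating reported accuracy through subject identity rather than depression markers, which is why subject-independent benchmarking matters for reproducibility.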