TRI-DEP: A Trimodal Comparative Study for Depression Detection Using Speech, Text, and EEG

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic depression detection faces challenges including unsystematic feature representation, unclear multimodal fusion strategies, and inconsistent evaluation protocols. To address these, this work conducts the first systematic comparative study of speech, text, and electroencephalography (EEG) modalities, evaluating pretrained embeddings versus handcrafted features, diverse neural encoders, and attention-based fusion mechanisms. We propose an end-to-end framework integrating pretrained modality-specific embeddings with cross-modal attention fusion, explicitly elucidating EEG’s critical role in multimodal synergy. Additionally, we establish a reproducible, standardized benchmark for depression detection. Experiments demonstrate that trimodal fusion significantly improves performance over unimodal and bimodal baselines; that pretrained embeddings consistently outperform handcrafted features; and that our method achieves state-of-the-art results on mainstream datasets. These findings empirically validate the efficacy of cross-modal complementarity in depression recognition.
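The cross-modal attention fusion described above can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden toy, not the paper's actual architecture: the embedding dimension, the scaled dot-product scoring, and the concatenation-based fusion head are illustrative choices, and the real model presumably uses learned projections and pretrained modality encoders.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query, keys, values):
    """One modality's embedding (query) attends over the other
    modalities' embeddings (keys/values). Shapes: query (d,),
    keys and values (n_modalities, d)."""
    scores = keys @ query / np.sqrt(query.shape[0])  # (n,)
    weights = softmax(scores)                        # attention weights, sum to 1
    return weights @ values                          # (d,) attended summary

# toy modality embeddings (in the paper these would come from
# pretrained EEG/speech/text encoders; d=8 is arbitrary here)
rng = np.random.default_rng(0)
d = 8
eeg, speech, text = (rng.standard_normal(d) for _ in range(3))

# each modality attends over the other two; the fused representation
# is the concatenation of the three attended summaries
fused = np.concatenate([
    cross_modal_attention(eeg,    np.stack([speech, text]), np.stack([speech, text])),
    cross_modal_attention(speech, np.stack([eeg, text]),    np.stack([eeg, text])),
    cross_modal_attention(text,   np.stack([eeg, speech]),  np.stack([eeg, speech])),
])
```

A downstream classifier head would then map `fused` to a depression/control prediction; the point of the sketch is only that each modality's contribution is reweighted by its relevance to the others, which is how attention-based fusion can expose EEG's complementary role.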

📝 Abstract
Depression is a widespread mental health disorder, yet its automatic detection remains challenging. Prior work has explored unimodal and multimodal approaches, with multimodal systems showing promise by leveraging complementary signals. However, existing studies are limited in scope, lack systematic comparisons of features, and suffer from inconsistent evaluation protocols. We address these gaps by systematically exploring feature representations and modelling strategies across EEG, speech, and text. We evaluate handcrafted features versus pretrained embeddings, assess the effectiveness of different neural encoders, compare unimodal, bimodal, and trimodal configurations, and analyse fusion strategies with attention to the role of EEG. Consistent subject-independent splits are applied to ensure robust, reproducible benchmarking. Our results show that (i) combining the EEG, speech, and text modalities enhances multimodal detection, (ii) pretrained embeddings outperform handcrafted features, and (iii) carefully designed trimodal models achieve state-of-the-art performance. Our work lays the groundwork for future research in multimodal depression detection.
Problem

Research questions and friction points this paper is trying to address.

Systematically comparing multimodal depression detection using EEG, speech, and text
Evaluating feature representations and fusion strategies across trimodal configurations
Addressing inconsistent evaluation protocols with reproducible subject-independent benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining EEG, speech, and text for depression detection
Using pretrained embeddings instead of handcrafted features
Applying consistent subject-independent splits for evaluation
Annisaa Fitri Nurfidausi
DISI, University of Bologna, Italy
Eleonora Mancini
DISI, University of Bologna, Italy
Paolo Torroni
University of Bologna
Artificial Intelligence, Natural Language Processing, Argumentation, Multi-Agent Systems, Computational Logic