Domain Adversarial Training for Mitigating Gender Bias in Speech-based Mental Health Detection

📅 2025-05-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Speech-based mental health detection models exhibit significant gender bias, resulting in uneven diagnostic performance for depression and PTSD across gender groups. To address this, we propose a fairness-enhancing method based on Domain-Adversarial Neural Networks (DANN), which explicitly models gender as distinct domains and integrates this into a multitask learning framework atop the pretrained Wav2Vec 2.0 speech foundation model, enabling gender-invariant representation learning. Evaluated on the E-DAIC dataset, our approach achieves up to a 13.29-percentage-point improvement in F1-score while reducing inter-gender performance disparity (ΔF1) by over 60%, without compromising diagnostic accuracy. Our key contributions are: (i) the first application of domain-adversarial training to speech-based mental health assessment; (ii) a principled integration of clinical validity and algorithmic fairness; and (iii) enhanced model robustness and real-world deployability.
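The paper itself includes no code, but the DANN mechanism the summary describes hinges on a gradient reversal layer. Below is a minimal PyTorch sketch of that layer, assuming a reversal-strength coefficient `lambda_` (our name, not the paper's); it illustrates the technique and is not the authors' implementation.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in
    the backward pass, so the shared encoder learns to *confuse* the
    gender (domain) classifier attached behind this layer."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing back into the encoder.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradientReversal.apply(x, lambda_)
```

During training, the diagnosis head is optimized normally, while the encoder receives negated gradients from the gender head, pushing the shared representation toward gender invariance.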

📝 Abstract
Speech-based AI models are emerging as powerful tools for detecting depression and post-traumatic stress disorder (PTSD), offering a non-invasive and cost-effective way to assess mental health. However, these models often struggle with gender bias, which can lead to unfair and inaccurate predictions. In this study, we address this issue by introducing a domain adversarial training approach that explicitly accounts for gender differences in speech-based depression and PTSD detection. Specifically, we treat the genders as distinct domains and integrate this information into a pretrained speech foundation model, then validate its effectiveness on the E-DAIC dataset. Experimental results show that our method notably improves detection performance, increasing the F1-score by up to 13.29 percentage points over the baseline. This highlights the importance of addressing demographic disparities in AI-driven mental health assessment.
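The abstract reports gains in F1-score, and the summary a reduced inter-gender gap (ΔF1). Here is a small sketch of how per-gender F1 and ΔF1 could be computed with scikit-learn; the grouping logic and function name are our assumptions, not the paper's evaluation code.

```python
from sklearn.metrics import f1_score

def per_gender_f1(y_true, y_pred, gender):
    """Return F1 per gender group and the inter-gender gap (ΔF1)."""
    scores = {}
    for g in sorted(set(gender)):
        idx = [i for i, gi in enumerate(gender) if gi == g]
        scores[g] = f1_score([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    delta_f1 = max(scores.values()) - min(scores.values())
    return scores, delta_f1

# Example: per-group scores and the fairness gap on toy labels.
scores, gap = per_gender_f1([1, 0, 1, 0], [1, 0, 0, 0], ["F", "F", "M", "M"])
```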
Problem

Research questions and friction points this paper is trying to address.

Mitigating gender bias in speech-based mental health detection
Improving depression and PTSD detection accuracy across genders
Enhancing AI fairness in mental health assessment models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain adversarial training for gender bias mitigation (see the sketch after this list)
Treats genders as distinct domains in model
Integrates gender info into pretrained speech model
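A sketch of how these pieces could fit together, reusing `grad_reverse` from the DANN sketch above: a shared Wav2Vec 2.0 encoder feeds a diagnosis head and an adversarial gender head. The checkpoint name, head sizes, and mean pooling are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn
from transformers import Wav2Vec2Model

class FairSpeechClassifier(nn.Module):
    """Multitask setup: shared encoder + diagnosis head + gender head
    behind gradient reversal (grad_reverse from the sketch above)."""

    def __init__(self, lambda_=1.0):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        hidden = self.encoder.config.hidden_size
        self.diagnosis_head = nn.Linear(hidden, 2)  # e.g. depressed vs. not
        self.gender_head = nn.Linear(hidden, 2)     # adversarial domain head
        self.lambda_ = lambda_

    def forward(self, input_values):
        # Mean-pool the frame-level features over time.
        h = self.encoder(input_values).last_hidden_state.mean(dim=1)
        diag_logits = self.diagnosis_head(h)
        gender_logits = self.gender_head(grad_reverse(h, self.lambda_))
        return diag_logits, gender_logits
```

Training would minimize the sum of a diagnosis cross-entropy and a gender cross-entropy; because of the reversal, the encoder is updated to maximize the gender loss while minimizing the diagnosis loss.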
June-Woo Kim
RSC LAB, MODULABS, Republic of Korea; Department of Psychiatry, Wonkwang University Hospital, Republic of Korea
Haram Yoon
Department of Psychiatry, School of Medicine, Wonkwang University, Republic of Korea
Wonkyo Oh
Department of Psychiatry, School of Medicine, Wonkwang University, Republic of Korea
Dawoon Jung
Department of Psychiatry, School of Medicine, Wonkwang University, Republic of Korea
Sung-Hoon Yoon
Postdoctoral fellow @ Harvard Medical, Ph.D/MS/BS @ KAIST
Multi-modal Visual Perception · Medical AI · Computer Vision · Label Efficient Learning
Dae-Jin Kim
Department of Psychiatry, Wonkwang University Hospital, Republic of Korea
Dong-Ho Lee
Department of Psychiatry, School of Medicine, Wonkwang University, Republic of Korea
Sang-Yeol Lee
Department of Psychiatry, Wonkwang University Hospital, Republic of Korea; Department of Psychiatry, School of Medicine, Wonkwang University, Republic of Korea
Chan-Mo Yang
Department of Psychiatry, Wonkwang University Hospital, Republic of Korea; Department of Psychiatry, School of Medicine, Wonkwang University, Republic of Korea