Mitigating Gender Bias in Depression Detection via Counterfactual Inference

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Audio-based depression detection models suffer from gender bias due to epidemiological gender imbalances in training data, leading the model to learn spurious correlations between gender and depression—resulting in over-diagnosis of females and under-diagnosis of males. To address this fairness issue, we propose a causal-graph-based counterfactual debiasing framework that explicitly models and blocks the direct causal path from gender to prediction outcomes, thereby eliminating gender bias at the mechanistic level. Our method integrates counterfactual inference with state-of-the-art acoustic modeling and performs end-to-end debiased optimization on the DAIC-WOZ dataset. Experiments demonstrate that our approach significantly reduces gender disparity—exceeding 40% reduction in both equalized odds (ΔEO) and demographic parity (ΔDP) gaps—while simultaneously improving overall detection accuracy (AUC increases by 2.3%). It outperforms existing debiasing baselines, offering an interpretable and verifiable causal fairness solution for trustworthy medical AI.
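The reported gaps can be read against the standard definitions of these fairness metrics. Below is a minimal sketch of ΔDP and ΔEO for binary predictions and a binary gender attribute; the paper's exact metric variants are not specified here, so treat these as the common textbook forms:

```python
def demographic_parity_gap(y_pred, group):
    """|P(y_hat=1 | group=0) - P(y_hat=1 | group=1)| over binary predictions."""
    rates = {}
    for g in (0, 1):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_pred, y_true, group):
    """Worst-case gap between groups in TPR (label=1) and FPR (label=0)."""
    def rate_gap(label):
        out = {}
        for g in (0, 1):
            idx = [i for i, (t, s) in enumerate(zip(y_true, group))
                   if s == g and t == label]
            out[g] = sum(y_pred[i] for i in idx) / len(idx)
        return abs(out[0] - out[1])
    return max(rate_gap(1), rate_gap(0))
```

ΔDP compares positive-prediction rates across groups regardless of the true labels; ΔEO additionally conditions on the true label, so it penalizes unequal true-positive and false-positive rates between genders.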

📝 Abstract
Audio-based depression detection models have demonstrated promising performance but often suffer from gender bias due to imbalanced training data. Epidemiological statistics show a higher prevalence of depression in females, leading models to learn spurious correlations between gender and depression. Consequently, models tend to over-diagnose female patients while underperforming on male patients, raising significant fairness concerns. To address this, we propose a novel Counterfactual Debiasing Framework grounded in causal inference. We construct a causal graph to model the decision-making process and identify gender bias as the direct causal effect of gender on the prediction. During inference, we employ counterfactual inference to estimate and subtract this direct effect, ensuring the model relies primarily on authentic acoustic pathological features. Extensive experiments on the DAIC-WOZ dataset using two advanced acoustic backbones demonstrate that our framework not only significantly reduces gender bias but also improves overall detection performance compared to existing debiasing strategies.
Problem

Research questions and friction points this paper is trying to address.

Mitigates gender bias in audio-based depression detection models.
Addresses over-diagnosis in females and underperformance in males.
Uses counterfactual inference to remove gender's direct causal effect.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterfactual inference to mitigate gender bias
Causal graph modeling for decision-making process
Subtracting direct gender effect from predictions
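The "subtract the direct gender effect" idea can be sketched as a counterfactual inference step at test time: compare the factual prediction with a counterfactual one in which the acoustic evidence is replaced by a neutral placeholder, keeping gender fixed. This is an illustrative sketch only; the fusion function, the neutral-audio logit, and the branch structure are assumptions, not the paper's exact formulation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(audio_logit, gender_logit):
    # Multiplicative fusion of the two branch probabilities
    # (one common choice for counterfactual debiasing frameworks).
    return sigmoid(audio_logit) * sigmoid(gender_logit)

def debiased_score(audio_logit, gender_logit, neutral_audio_logit=0.0):
    # Total effect: prediction from the factual inputs.
    total = fuse(audio_logit, gender_logit)
    # Direct effect of gender: counterfactual world where the audio
    # evidence is "muted" (replaced by a neutral logit), gender unchanged.
    direct = fuse(neutral_audio_logit, gender_logit)
    # Subtracting the direct effect leaves the audio-driven component.
    return total - direct
```

With this construction, an input carrying no acoustic evidence (audio logit at the neutral value) yields a debiased score of exactly zero regardless of gender, which is the intended blocking of the gender-to-prediction path.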
Mingxuan Hu
Xi'an Jiaotong-Liverpool University, Computer Science
Ziqi Liu
School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
Hongbo Ma
Assistant Professor of Civil & Environmental Engineering, University of Illinois at Urbana-Champaign
Geomorphology, Fluid Mechanics, Sedimentology, Hydraulic Engineering
Jiaqi Liu
School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
Xinlan Wu
School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
Yangbin Chen
Xi'an Jiaotong-Liverpool University; CUHK; CityUHK; USTC
machine intelligence, spoken language processing, affective computing