Machine Learning Approaches for Depression Detection on Social Media: A Systematic Review of Biases and Methodological Challenges

📅 2024-10-21
🏛️ Journal of Behavioral Data Science
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Machine learning models for depression detection on social media (primarily Twitter) are plagued by methodological biases and flaws. Method: We systematically reviewed 47 post-2010 studies and, for the first time, applied the PROBAST tool to uniformly assess their methodological quality, complementing it with bibliometric analysis, transparency auditing, and bias-risk assessment. Results: We identified three critical issues: 80% of studies employed non-probability sampling (introducing selection bias), only 23% addressed linguistic negation, and merely 27.7% reported rigorous hyperparameter optimization. Systemic shortcomings include linguistic homogeneity, inadequate handling of class imbalance, and poor reporting transparency. Based on these findings, we propose five evidence-based improvements: enhancing data diversity, standardizing preprocessing, adopting robust class-imbalance mitigation techniques, formalizing hyperparameter-tuning protocols, and strengthening reporting transparency, collectively improving model reliability and cross-population generalizability.
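The class-imbalance point above can be made concrete with a small, stdlib-only sketch. The 90/10 label split and the always-negative baseline are illustrative assumptions, not data from the review; they show why the paper favors imbalance-aware metrics such as F1 over raw accuracy.

```python
# Why raw accuracy misleads on imbalanced data (stdlib only).
# The 90/10 split below is synthetic, chosen only for illustration.

def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class of interest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 90 non-depressed users (label 0) vs 10 depressed users (label 1)
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100  # a baseline that always predicts "not depressed"

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision, recall, f1 = precision_recall_f1(y_true, y_pred)
print(accuracy)  # 0.9 -- looks strong
print(f1)        # 0.0 -- the baseline finds no depressed users at all
```

A 90%-accurate model here detects zero positive cases, which is the failure mode behind the review's warning about studies that report accuracy without addressing class imbalance.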

📝 Abstract
The global rise in depression necessitates innovative detection methods for early intervention. Social media provides a unique opportunity to identify depression through user-generated posts. This systematic review evaluates machine learning (ML) models for depression detection on social media, focusing on biases and methodological challenges throughout the ML lifecycle. A search of PubMed, IEEE Xplore, and Google Scholar identified 47 relevant studies published after 2010. The Prediction model Risk Of Bias ASsessment Tool (PROBAST) was used to assess methodological quality and risk of bias. Significant biases impacting model reliability and generalizability were found. There is a predominant reliance on Twitter (63.8%) and English-language content (over 90%), with most studies focusing on users from the United States and Europe. Non-probability sampling methods (approximately 80%) limit representativeness. Only 23% of studies explicitly addressed linguistic nuances like negations, crucial for accurate sentiment analysis. Inconsistent hyperparameter tuning was observed, with only 27.7% of studies reporting a rigorous tuning procedure. About 17% did not adequately partition data into training, validation, and test sets, risking overfitting. While 74.5% used appropriate evaluation metrics for imbalanced data, others relied on accuracy without addressing class imbalance, potentially skewing results. Reporting transparency varied, often lacking critical methodological details. These findings highlight the need to diversify data sources, standardize preprocessing protocols, ensure consistent model development practices, address class imbalance, and enhance reporting transparency. By overcoming these challenges, future research can develop more robust and generalizable ML models for depression detection on social media, contributing to improved mental health outcomes globally.
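The negation issue the abstract raises (only 23% of studies explicitly handled negations) is often addressed with negation-scope marking during preprocessing. The sketch below is a minimal stdlib-only illustration of that idea; the cue list and the rule of marking tokens until the next punctuation are simplifying assumptions, not the reviewed studies' actual pipelines.

```python
# Minimal negation-scope marking: tokens after a negation cue get a
# "_NEG" suffix until the next clause-ending punctuation, so that
# "not happy" and "happy" become distinct features for a classifier.
import re

NEGATION_CUES = {"not", "no", "never", "n't", "cannot"}
SCOPE_END = {".", ",", ";", "!", "?"}

def mark_negation(text):
    tokens = re.findall(r"n't|\w+|[.,;!?]", text.lower())
    marked, in_scope = [], False
    for tok in tokens:
        if tok in SCOPE_END:
            in_scope = False  # punctuation closes the negation scope
            marked.append(tok)
        elif tok in NEGATION_CUES:
            in_scope = True   # everything after the cue is negated
            marked.append(tok)
        elif in_scope:
            marked.append(tok + "_NEG")
        else:
            marked.append(tok)
    return marked

print(mark_negation("I am not happy, I feel fine"))
# ['i', 'am', 'not', 'happy_NEG', ',', 'i', 'feel', 'fine']
```

Without a step like this, a bag-of-words model treats "not happy" and "happy" as the same positive signal, which is exactly the sentiment-analysis distortion the review flags.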
Problem

Research questions and friction points this paper is trying to address.

Detect depression early from user-generated social media data.
Address biases in machine learning models for depression detection.
Overcome methodological challenges that limit model reliability and generalizability.
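One such methodological challenge, flagged for about 17% of the reviewed studies, is failing to partition data into disjoint training, validation, and test sets. Below is a minimal stdlib-only sketch of the standard three-way split; the fractions, seed, and placeholder data are illustrative assumptions, not settings from the reviewed studies.

```python
import random

def train_val_test_split(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once, then carve out disjoint train/val/test subsets."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

posts = list(range(1000))  # stand-in for labeled user posts
train, val, test = train_val_test_split(posts)
print(len(train), len(val), len(test))  # 700 150 150
# Hyperparameters are tuned on val only; test is touched once, at the end.
```

Tuning on the validation set and reserving the test set for a single final evaluation is what guards against the overfitting risk the review describes.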
Innovation

Methods, ideas, or system contributions that make the work stand out.

First uniform application of the PROBAST tool to ML depression-detection studies
Systematic review of 47 studies combining bibliometric analysis and transparency auditing
Five evidence-based recommendations for more reliable, generalizable models