Detecting mental disorder on social media: a ChatGPT-augmented explainable approach

📅 2024-01-30
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the need for timely and interpretable detection of depressive symptoms on social media, this paper proposes BERT-XDD, a self-explanatory BERT variant built on masked attention. It uses BERTweet for domain-adapted feature encoding and integrates ChatGPT to transform attention-based attributions into clinically accessible natural-language explanations. The framework jointly outputs classification predictions and fine-grained, token-level explanations while remaining end-to-end trainable. Evaluated on Twitter-based depression detection benchmarks, BERT-XDD achieves state-of-the-art performance, and its attention masks can be inspected directly, enabling mental health practitioners to rapidly identify risk indicators and initiate early interventions. The work is the first to combine a self-explanatory architecture, domain-specific pretraining (BERTweet), and LLM-based explanation post-processing, improving model transparency, trustworthiness, and clinical utility in mental health screening.

📝 Abstract
In the digital era, the prevalence of depressive symptoms expressed on social media has raised serious concerns, necessitating advanced methodologies for timely detection. This paper addresses the challenge of interpretable depression detection by proposing a novel methodology that effectively combines Large Language Models (LLMs) with eXplainable Artificial Intelligence (XAI) and conversational agents like ChatGPT. In our methodology, explanations are achieved by integrating BERTweet, a Twitter-specific variant of BERT, into a novel self-explanatory model, namely BERT-XDD, capable of providing both classification and explanations via masked attention. The interpretability is further enhanced using ChatGPT to transform technical explanations into human-readable commentaries. By introducing an effective and modular approach for interpretable depression detection, our methodology can contribute to the development of socially responsible digital platforms, fostering early intervention and support for mental health challenges under the guidance of qualified healthcare professionals.
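The pipeline the abstract describes (a classifier whose attention weights serve as token-level attributions, post-processed by ChatGPT into plain language) can be illustrated with a minimal sketch. The function names, the softmax-over-scores attribution scheme, and the prompt wording below are illustrative assumptions, not the paper's actual BERT-XDD masked-attention mechanism or its ChatGPT prompts:

```python
import numpy as np

def token_attributions(tokens, attention_scores, top_k=3):
    """Normalize raw per-token attention scores via softmax and return
    the top-k (token, weight) pairs as a simple attribution list."""
    scores = np.asarray(attention_scores, dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # softmax over the token positions
    ranked = sorted(zip(tokens, weights), key=lambda tw: tw[1], reverse=True)
    return ranked[:top_k]

def explanation_prompt(text, attributions):
    """Wrap the technical attributions into a request asking an LLM to
    rewrite them as a human-readable commentary (hypothetical wording)."""
    salient = ", ".join(f"'{tok}' ({w:.2f})" for tok, w in attributions)
    return (
        f'The classifier flagged this post: "{text}"\n'
        f"Most influential tokens and their weights: {salient}.\n"
        "Rewrite this attribution as a short, plain-language explanation."
    )

tokens = ["i", "feel", "hopeless", "every", "day"]
scores = [0.1, 0.8, 2.5, 0.3, 1.2]  # stand-in for masked-attention outputs
top = token_attributions(tokens, scores, top_k=2)
prompt = explanation_prompt("i feel hopeless every day", top)
```

In the actual system the scores would come from BERT-XDD's masked attention over BERTweet embeddings, and the prompt would be sent to ChatGPT; here they are hard-coded to keep the sketch self-contained.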
Problem

Research questions and friction points this paper is trying to address.

Detecting mental disorders on social media using advanced AI.
Combining LLMs and XAI for interpretable depression detection.
Enhancing interpretability with ChatGPT for human-readable explanations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LLMs with XAI for depression detection.
Integrates BERTweet into the self-explanatory BERT-XDD model.
Uses ChatGPT to enhance interpretability with human-readable explanations.