🤖 AI Summary
This study addresses the challenge of detecting latent minority stress, the stress experienced by sexual and gender minority (SGM) individuals, in social media posts through a graph-enhanced Transformer framework. Methodologically, it integrates multiple pretrained language models (ELECTRA, BERT, RoBERTa, BART), couples them with graph neural networks to explicitly model user interactions and conversational relationships, and systematically evaluates supervised fine-tuning, zero-shot, and few-shot learning on highly imbalanced Reddit data. The results provide the first empirical demonstration that explicitly encoding graph-structured social context significantly improves minority stress detection accuracy, particularly for fine-grained linguistic markers such as identity concealment, internalized stigma, and support-seeking behavior. Supervised fine-tuning consistently outperforms zero-shot and few-shot alternatives. The work advances digital mental health interventions by delivering an interpretable, deployable computational paradigm grounded in both linguistic and relational modeling.
📝 Abstract
Individuals from sexual and gender minority groups experience disproportionately high rates of poor health outcomes and mental disorders compared to their heterosexual and cisgender counterparts, largely as a consequence of minority stress as described by Meyer's (2003) model. This study presents the first comprehensive evaluation of transformer-based architectures for detecting minority stress in online discourse. We benchmark multiple transformer models, including ELECTRA, BERT, RoBERTa, and BART, against traditional machine learning baselines and graph-augmented variants. We further evaluate zero-shot and few-shot learning paradigms to assess their applicability to underrepresented datasets. Experiments are conducted on the two largest publicly available Reddit corpora for minority stress detection, comprising 12,645 and 5,789 posts, and are repeated over five random seeds to ensure robustness. Our results demonstrate that integrating graph structure consistently improves detection performance over transformer-only models and that supervised fine-tuning with relational context outperforms zero-shot and few-shot approaches. Theoretical analysis reveals that modeling social connectivity and conversational context via graph augmentation sharpens the models' ability to identify key linguistic markers such as identity concealment, internalized stigma, and calls for support, suggesting that graph-enhanced transformers offer the most reliable foundation for digital health interventions and public health policy.
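To make the graph-augmentation idea concrete, here is a minimal numpy sketch of one common way to combine per-post transformer embeddings with conversational structure: a single normalized graph-convolution layer (in the style of Kipf and Welling's GCN) over a reply graph, followed by a linear scoring head. Everything here is illustrative: the embeddings are random stand-ins for transformer outputs, and the shapes, edge list, and layer choice are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_posts, d_model, d_hidden = 6, 8, 4

# Stand-in for transformer encoder outputs: one vector per Reddit post.
# In the real pipeline these would be, e.g., [CLS] embeddings.
X = rng.normal(size=(n_posts, d_model))

# Adjacency of a hypothetical conversation graph:
# an edge (i, j) means post j is a reply to post i.
A = np.zeros((n_posts, n_posts))
for i, j in [(0, 1), (0, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0

# Symmetric normalization with self-loops:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(n_posts)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# One graph-convolution layer, then a linear head scoring each post.
W = rng.normal(size=(d_model, d_hidden))
w_out = rng.normal(size=d_hidden)

H = np.maximum(A_hat @ X @ W, 0.0)      # ReLU(A_hat X W): mix neighbors
logits = H @ w_out                       # per-post stress score
probs = 1.0 / (1.0 + np.exp(-logits))    # sigmoid -> probability per post
```

Each post's representation is averaged with its neighbors' before classification, which is the mechanism by which conversational context (e.g., a supportive reply thread) can shift the prediction for an otherwise ambiguous post.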