🤖 AI Summary
Machines struggle to interpret non-literal expressions—such as irony and metaphor—in mental health–related memes, hindering fine-grained detection of anxiety symptoms. Method: We propose M3H, the first multimodal framework for granular anxiety symptom identification, built upon a novel, GAD-scale–aligned anxiety meme dataset (AxiOM). M3H integrates ConceptNet-enhanced commonsense knowledge injection, clinically adapted vision–language alignment, and metaphor-aware attentional fusion. Contribution/Results: On AxiOM, M3H achieves a 4.20–4.66 percentage-point improvement in weighted F1 score; cross-dataset evaluation on RESTORE confirms strong generalizability. Human-centered evaluation demonstrates significantly improved understanding of metaphorical intent, and ablation studies quantify each module's contribution. This work establishes the first commonsense-augmented modeling paradigm for mental health metaphor interpretation, advancing social media psychological risk detection toward deep semantic understanding.
📝 Abstract
The expression of mental health symptoms through non-traditional means, such as memes, has gained remarkable attention in recent years, with users often conveying their mental health struggles through the figurative intricacies of memes. While humans rely on commonsense knowledge to interpret these complex expressions, current Multimodal Language Models (MLMs) struggle to capture the figurative aspects inherent in memes. To address this gap, we introduce a novel dataset, AxiOM, derived from the GAD anxiety questionnaire, which categorizes memes into six fine-grained anxiety symptoms. We then propose a commonsense- and domain-enriched framework, M3H, to enhance MLMs' ability to interpret figurative language and commonsense knowledge. The overarching goal is to first understand and then classify the mental health symptoms expressed in memes. We benchmark M3H against six competitive baselines (with 20 variations), demonstrating improvements in both quantitative and qualitative metrics, including a detailed human evaluation. We observe clear improvements of 4.20% and 4.66% on the weighted-F1 metric. To assess generalizability, we perform extensive experiments on a public dataset, RESTORE, for depressive symptom identification, and present an extensive ablation study that highlights the contribution of each module on both datasets. Our findings reveal the limitations of existing models and the advantage of employing commonsense knowledge to enhance figurative understanding.
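The attentional fusion idea mentioned above can be illustrated with a minimal sketch: modality features (text, image, and retrieved commonsense cues) are weighted by their similarity to a query representation and combined into a single fused vector. All names, dimensions, and feature values here are illustrative assumptions, not the paper's actual architecture.

```python
import math

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(query, modality_feats):
    """Score each modality feature against the query (scaled dot product),
    normalize the scores with softmax, and return the convex combination
    of the features plus the attention weights."""
    scale = math.sqrt(len(query))
    scores = [dot(query, f) / scale for f in modality_feats]
    weights = softmax(scores)
    dim = len(modality_feats[0])
    fused = [sum(w * f[i] for w, f in zip(weights, modality_feats))
             for i in range(dim)]
    return fused, weights

# Toy features for the text, image, and commonsense (ConceptNet-style) channels.
text_feat = [1.0, 0.0, 0.0]
image_feat = [0.0, 1.0, 0.0]
commonsense_feat = [0.5, 0.5, 0.0]

fused, weights = attention_fuse(text_feat, [text_feat, image_feat, commonsense_feat])
```

In a real model the query and features would be learned embeddings and the attention would be a trained layer; the sketch only shows the fusion mechanics, where the commonsense channel contributes in proportion to its relevance to the query.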