AI Summary
This work addresses the fragmentation of existing meme analysis across tasks and languages, which, in the absence of a unified framework, hinders cross-domain generalization. To overcome this limitation, we propose MemeLens, the first unified vision-language model supporting multilingual and multitask meme understanding. MemeLens integrates 38 public datasets mapped onto 20 shared tasks spanning harm detection, target identification, rhetorical intent, and sentiment analysis. By constructing a unified label schema, incorporating an explanation-augmented mechanism, and performing large-scale multitask training on multimodal data, MemeLens substantially outperforms single-task fine-tuning baselines. The model demonstrates superior generalization, robustness across languages, and enhanced interpretability in capturing the multimodal semantic interactions inherent in internet memes.
Abstract
Memes are a dominant medium for online communication and manipulation because their meaning emerges from interactions between embedded text, imagery, and cultural context. Existing meme research is distributed across tasks (hate, misogyny, propaganda, sentiment, humour) and languages, which limits cross-domain generalization. To address this gap, we propose MemeLens, a unified multilingual and multitask explanation-enhanced vision-language model (VLM) for meme understanding. We consolidate 38 public meme datasets, filtering and mapping dataset-specific labels into a shared taxonomy of 20 tasks spanning harm, targets, figurative/pragmatic intent, and affect. We present a comprehensive empirical analysis across modeling paradigms, task categories, and datasets. Our findings suggest that robust meme understanding requires multimodal training, exhibits substantial variation across semantic categories, and remains sensitive to over-specialization when models are fine-tuned on individual datasets rather than trained in a unified setting. We will make the experimental resources and datasets publicly available to the community.
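To make the label-consolidation step concrete, here is a minimal sketch of how dataset-specific labels might be mapped onto a shared task taxonomy. All task names, dataset names, and mapping entries below are hypothetical placeholders for illustration only, not the actual MemeLens schema.

```python
# Minimal sketch of mapping dataset-specific labels onto a shared task taxonomy.
# Task names, dataset names, and label mappings are hypothetical placeholders,
# not the actual MemeLens schema.

from dataclasses import dataclass

# Shared taxonomy: task name -> allowed canonical labels (illustrative only).
SHARED_TASKS = {
    "harm_detection": {"harmful", "not_harmful"},
    "sentiment": {"positive", "neutral", "negative"},
}

# Per-dataset mapping from original labels to (shared task, canonical label).
DATASET_LABEL_MAPS = {
    "example_hate_dataset": {
        "hateful": ("harm_detection", "harmful"),
        "non-hateful": ("harm_detection", "not_harmful"),
    },
    "example_sentiment_dataset": {
        "pos": ("sentiment", "positive"),
        "neg": ("sentiment", "negative"),
        "neu": ("sentiment", "neutral"),
    },
}

@dataclass
class UnifiedExample:
    dataset: str
    task: str
    label: str

def to_unified(dataset: str, original_label: str) -> UnifiedExample:
    """Map one dataset-specific label onto the shared schema, with validation."""
    task, label = DATASET_LABEL_MAPS[dataset][original_label]
    assert label in SHARED_TASKS[task], f"{label} not in taxonomy for {task}"
    return UnifiedExample(dataset=dataset, task=task, label=label)

if __name__ == "__main__":
    # Example: a dataset-specific "hateful" label becomes a canonical harm label.
    print(to_unified("example_hate_dataset", "hateful"))
```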