MemeLens: Multilingual Multitask VLMs for Memes

πŸ“… 2026-01-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing meme analysis research is fragmented across tasks and languages, and the lack of a unified framework hinders cross-domain generalization. To overcome this limitation, we propose MemeLens, the first unified vision-language model supporting multilingual and multitask meme understanding. MemeLens integrates 38 public datasets mapped onto 20 shared tasks spanning harm detection, target identification, rhetorical intent, and sentiment analysis. By constructing a unified label schema, incorporating an explanation-augmented mechanism, and performing large-scale multitask training on multimodal data, MemeLens substantially outperforms single-task fine-tuning baselines. The model demonstrates superior generalization, robustness across languages, and enhanced interpretability in capturing the multimodal semantic interactions inherent in internet memes.

πŸ“ Abstract
Memes are a dominant medium for online communication and manipulation because meaning emerges from interactions between embedded text, imagery, and cultural context. Existing meme research is distributed across tasks (hate, misogyny, propaganda, sentiment, humour) and languages, which limits cross-domain generalization. To address this gap, we propose MemeLens, a unified multilingual and multitask explanation-enhanced Vision Language Model (VLM) for meme understanding. We consolidate 38 public meme datasets, filter and map dataset-specific labels into a shared taxonomy of 20 tasks spanning harm, targets, figurative/pragmatic intent, and affect. We present a comprehensive empirical analysis across modeling paradigms, task categories, and datasets. Our findings suggest that robust meme understanding requires multimodal training, exhibits substantial variation across semantic categories, and remains sensitive to over-specialization when models are fine-tuned on individual datasets rather than trained in a unified setting. We will make the experimental resources and datasets publicly available for the community.
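The label-unification step described in the abstract can be sketched as follows. This is a minimal illustration only: the dataset names, label strings, and task names below are assumptions for the sake of the example, not the paper's actual schema.

```python
# Hypothetical sketch of mapping dataset-specific labels into a shared
# (task, label) taxonomy, as described in the abstract. All dataset
# names, labels, and task names here are illustrative assumptions.

# Per-dataset mapping: original label -> (shared task, shared label).
LABEL_MAP = {
    "hateful_memes": {
        "hateful": ("harm_detection", "harmful"),
        "not_hateful": ("harm_detection", "not_harmful"),
    },
    "memotion": {
        "positive": ("sentiment", "positive"),
        "negative": ("sentiment", "negative"),
        "neutral": ("sentiment", "neutral"),
    },
}

def unify(dataset: str, label: str) -> tuple[str, str]:
    """Map a dataset-specific label onto the shared (task, label) schema."""
    return LABEL_MAP[dataset][label]

print(unify("hateful_memes", "hateful"))  # ('harm_detection', 'harmful')
print(unify("memotion", "neutral"))       # ('sentiment', 'neutral')
```

Once every dataset is projected onto the shared schema, examples from different sources can be pooled into a single multitask training mix rather than kept in per-dataset silos.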
Problem

Research questions and friction points this paper is trying to address.

memes
multilingual
multitask
vision-language models
cross-domain generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual multitask
vision-language model
meme understanding
unified taxonomy
explanation-enhanced
πŸ”Ž Similar Papers
No similar papers found.
Ali Ezzat Shahroor
Qatar Computing Research Institute, Qatar
Mohamed Bayan Kmainasi
Qatar University, Qatar
A. Hasnat
Blackbird.AI, USA; APA VI.AI, France
D. Dimitrov
Sofia University, Bulgaria
Giovanni Da San Martino
Associate Professor, Department of Mathematics, University of Padova, Italy
Machine Learning and Natural Language Processing
Preslav Nakov
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Computational Linguistics, Large Language Models, Fact-checking, Fake News
Firoj Alam
Qatar Computing Research Institute, Qatar