🤖 AI Summary
Existing multimodal hate speech datasets often suffer from coarse-grained labels and a lack of contextual information, which limits detection performance. To address this, the authors propose the first framework built on a seven-agent collaborative annotation mechanism to construct M³, a fine-grained, cross-platform, multilingual multimodal hate speech benchmark comprising 2,455 samples. Each sample is accompanied by human-verified rationales, hierarchical labels, and authentic social context. Leveraging this dataset, the authors conduct a systematic evaluation of state-of-the-art multimodal large language models (MLLMs) and find that they struggle to effectively leverage contextual cues, sometimes even exhibiting performance degradation. This reveals a significant gap in current models' ability to reason within real-world contextual settings.
📝 Abstract
Hate speech online targets individuals or groups based on identity attributes and spreads rapidly, posing serious social risks. Memes, which combine images and text, have emerged as a nuanced vehicle for disseminating hate speech, often relying on cultural knowledge for interpretation. However, existing multimodal hate speech datasets suffer from coarse-grained labeling and a lack of integration with surrounding discourse, leading to imprecise and incomplete assessments. To bridge this gap, we propose an agentic annotation framework that coordinates seven specialized agents to generate hierarchical labels and rationales. Based on this framework, we construct M³ (Multi-platform, Multi-lingual, and Multimodal Meme), a dataset of 2,455 memes collected from X, 4chan, and Weibo, featuring fine-grained hate labels and human-verified rationales. Benchmarking state-of-the-art Multimodal Large Language Models reveals that these models struggle to effectively utilize surrounding post context, which often fails to improve, and can even degrade, detection performance. Our findings highlight the challenges these models face in reasoning over memes embedded in real-world discourse and underscore the need for context-aware multimodal architectures. Our dataset and code are available at https://github.com/mira-ai-lab/M3.