Is AI Ready for Multimodal Hate Speech Detection? A Comprehensive Dataset and Benchmark Evaluation

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal hate speech datasets often suffer from coarse-grained labels and a lack of contextual information, which limits detection performance. To address this, this work proposes the first framework based on a seven-agent collaborative annotation mechanism to construct M³—a fine-grained, cross-platform, multilingual multimodal hate speech benchmark comprising 2,455 samples. Each sample is accompanied by human-verified rationales, hierarchical labels, and authentic social context. Leveraging this dataset, we conduct a systematic evaluation of state-of-the-art multimodal large language models (MLLMs) and find that they struggle to effectively leverage contextual cues, sometimes even exhibiting performance degradation. This reveals a significant gap in the current models’ ability to reason within real-world contextual settings.
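The seven-agent collaborative annotation described above can be sketched as a simple vote-and-aggregate loop. This is a hypothetical illustration only: the agent roles, the keyword heuristic standing in for real MLLM calls, and the majority-vote consensus rule are assumptions, not the authors' actual design.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Annotation:
    label: str       # coarse label, e.g. "hateful" / "non-hateful"
    target: str      # fine-grained target group, "" if none
    rationale: str   # free-text justification

def agent_vote(meme_text: str, context: str, role: str) -> Annotation:
    """Stand-in for one specialized agent (e.g. an OCR, cultural-context,
    or target-identification agent). A real agent would query an MLLM;
    here a trivial keyword heuristic keeps the sketch runnable."""
    hateful = any(w in meme_text.lower() for w in ("hate", "slur"))
    return Annotation(
        label="hateful" if hateful else "non-hateful",
        target="group" if hateful else "",
        rationale=f"{role}: keyword heuristic over meme text",
    )

def annotate(meme_text: str, context: str, roles: list[str]) -> Annotation:
    """Aggregate the agents' votes by simple majority."""
    votes = [agent_vote(meme_text, context, r) for r in roles]
    majority, _ = Counter(v.label for v in votes).most_common(1)[0]
    return next(v for v in votes if v.label == majority)

# Seven hypothetical roles; the paper does not publish this exact split.
ROLES = ["ocr", "visual", "textual", "context", "culture", "target", "judge"]
print(annotate("an example containing a slur", "", ROLES).label)  # hateful
```

In the real framework, the hierarchical labels and rationales produced this way are then human-verified before entering the dataset.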

📝 Abstract
Hate speech online targets individuals or groups based on identity attributes and spreads rapidly, posing serious social risks. Memes, which combine images and text, have emerged as a nuanced vehicle for disseminating hate speech, often relying on cultural knowledge for interpretation. However, existing multimodal hate speech datasets suffer from coarse-grained labeling and a lack of integration with surrounding discourse, leading to imprecise and incomplete assessments. To bridge this gap, we propose an agentic annotation framework that coordinates seven specialized agents to generate hierarchical labels and rationales. Based on this framework, we construct M^3 (Multi-platform, Multi-lingual, and Multimodal Meme), a dataset of 2,455 memes collected from X, 4chan, and Weibo, featuring fine-grained hate labels and human-verified rationales. Benchmarking state-of-the-art Multimodal Large Language Models reveals that these models struggle to effectively utilize surrounding post context, which often fails to improve or even degrades detection performance. Our findings highlight the challenges these models face in reasoning over memes embedded in real-world discourse and underscore the need for a context-aware multimodal architecture. Our dataset and code are available at https://github.com/mira-ai-lab/M3.
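The benchmark's central finding — that surrounding post context often fails to help or even hurts detection — corresponds to a simple context ablation: score the same detector with and without context and report the delta. The sketch below is a hedged illustration; `predict` is a hypothetical stand-in for a real MLLM call, and the toy data is invented.

```python
def predict(meme: dict, use_context: bool) -> str:
    """Hypothetical detector; a real evaluation would prompt an MLLM
    with the meme image, its text, and optionally the surrounding post."""
    text = meme["text"] + (" " + meme["context"] if use_context else "")
    return "hateful" if "slur" in text.lower() else "non-hateful"

def accuracy(memes: list[dict], use_context: bool) -> float:
    correct = sum(predict(m, use_context) == m["gold"] for m in memes)
    return correct / len(memes)

# Invented examples: the second meme is benign, but naive use of its
# context flips the prediction — the degradation the paper reports.
data = [
    {"text": "a slur here", "context": "", "gold": "hateful"},
    {"text": "benign joke", "context": "reply contains a slur",
     "gold": "non-hateful"},
]
no_ctx = accuracy(data, use_context=False)
with_ctx = accuracy(data, use_context=True)
print(f"no-context {no_ctx:.2f}  with-context {with_ctx:.2f}  "
      f"delta {with_ctx - no_ctx:+.2f}")
```

A negative delta in such an ablation is exactly the symptom the abstract describes: the model ingests the context but cannot reason over it, so added signal becomes added noise.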
Problem

Research questions and friction points this paper is trying to address.

multimodal hate speech
meme
context integration
fine-grained labeling
dataset limitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic annotation framework
fine-grained multimodal hate speech
context-aware multimodal architecture
M3 dataset
multimodal meme benchmarking
Rui Xing
University of Melbourne
Natural Language Processing, Artificial Intelligence, Deep Learning
Qi Chai
The Hong Kong University of Science and Technology (Guangzhou)
Jie Ma
MOE KLINNS Lab, Xi’an Jiaotong University; School of Cyber Science and Engineering, Xi’an Jiaotong University
Jing Tao
MOE KLINNS Lab, Xi’an Jiaotong University
Pinghui Wang
Xi'an Jiaotong University
Shuming Zhang
Northwest University
Xinping Wang
School of Cyber Science and Engineering, Xi’an Jiaotong University
Hao Wang
The Hong Kong University of Science and Technology
Machine Learning, Data Mining