Beyond Hate: Differentiating Uncivil and Intolerant Speech in Multimodal Content Moderation

📅 2026-03-24
🤖 AI Summary
This work addresses a common conflation in existing multimodal content moderation benchmarks, which merge “incivility” (e.g., rude tone) and “intolerance” (e.g., identity-based attacks) into a single hate label despite their conceptual distinctness. Drawing on communication theory, the authors propose a fine-grained annotation framework that disentangles these two harmful dimensions across a dataset of 2,030 memes, the first such effort in multimodal moderation. Training vision-language models, including LLaVA-1.6-Mistral-7B and Qwen2.5-VL-7B, on both coarse- and fine-grained labels via transfer and joint learning strategies markedly improves detection balance: the gap between false negative and false positive rates (FNR-FPR) for LLaVA drops from 0.74 to 0.42, improving both accuracy and robustness in harmful content identification.

📝 Abstract
Current multimodal toxicity benchmarks typically use a single binary hatefulness label. This coarse approach conflates two fundamentally different characteristics of expression: tone and content. Drawing on communication science theory, we introduce a fine-grained annotation scheme that distinguishes two separable dimensions, incivility (rude or dismissive tone) and intolerance (content that attacks pluralism and targets groups or identities), and apply it to 2,030 memes from the Hateful Memes dataset. We evaluate different vision-language models under coarse-label training, transfer learning across label schemes, and a joint learning approach that combines the coarse hatefulness label with our fine-grained annotations. Our results show that fine-grained annotations complement existing coarse labels and, when used jointly, improve overall model performance. Moreover, models trained with the fine-grained scheme exhibit more balanced moderation-relevant error profiles and are less prone to under-detection of harmful content than models trained on hatefulness labels alone (FNR-FPR, the difference between false negative and false positive rates: 0.74 to 0.42 for LLaVA-1.6-Mistral-7B; 0.54 to 0.28 for Qwen2.5-VL-7B). This work contributes to data-centric approaches in content moderation by improving the reliability and accuracy of moderation systems through enhanced data quality. Overall, combining both coarse and fine-grained labels provides a practical route to more reliable multimodal moderation.
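The FNR-FPR gap reported above can be computed directly from a confusion matrix. A minimal sketch, assuming binary labels (1 = harmful, 0 = benign); the function name and example data are illustrative, not from the paper:

```python
# FNR-FPR balance metric: false negative rate minus false positive rate.
# A large positive value means the model under-detects harmful content.

def fnr_fpr_gap(y_true, y_pred):
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # missed harmful items
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # benign items flagged
    return fnr - fpr

# Hypothetical example: a model that misses 3 of 4 harmful memes
# while flagging 1 of 4 benign ones has a gap of 0.75 - 0.25 = 0.5.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 1]
print(fnr_fpr_gap(y_true, y_pred))  # → 0.5
```

Driving this gap toward zero, as the joint training does for both models, means errors are no longer skewed toward letting harmful content through.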
Problem

Research questions and friction points this paper is trying to address.

hate speech
incivility
intolerance
multimodal content moderation
fine-grained annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-grained annotation
multimodal content moderation
incivility vs. intolerance
vision-language models
data-centric AI