Detecting and Understanding Hateful Contents in Memes Through Captioning and Visual Question-Answering

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of detecting covert, modality-coupled hate speech in memes—where hateful intent is implicitly encoded across text and image—we propose a multimodal collaborative reasoning framework. Our method integrates OCR-based text extraction, neutral image captioning (via BLIP-2/LLaVA), ViT-BERT joint visual–linguistic encoding, RAG-enhanced contextual retrieval, and a novel “neutral captioning + iterative VQA” symbolic reasoning mechanism for attributing implicit hate intent. Furthermore, we introduce a hierarchical sub-label classification scheme coupled with a RAG-driven, context-aware paradigm to overcome unimodal blind spots. Evaluated on the Facebook Hateful Memes dataset, our approach surpasses state-of-the-art unimodal and multimodal models in both accuracy and AUC-ROC; notably, it achieves a 12.6% improvement in F1-score for covert hate detection, significantly advancing fine-grained semantic understanding of multimodal hate speech.
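The "neutral captioning + iterative VQA" pipeline described above can be sketched as a simple orchestration loop. The model calls below are stubs; in a real system they would wrap an OCR engine, a BLIP-2/LLaVA captioner, and a VQA model. All function and field names here are illustrative assumptions, not the authors' actual API.

```python
from dataclasses import dataclass, field

@dataclass
class MemeAnalysis:
    """Collected unimodal and cross-modal signals for one meme."""
    ocr_text: str
    caption: str
    vqa_answers: list = field(default_factory=list)

def extract_text(image):
    # Stub for an OCR engine (e.g. Tesseract) reading overlaid meme text.
    return "sample overlaid text"

def neutral_caption(image):
    # Stub for a captioner (BLIP-2/LLaVA) describing the image neutrally,
    # without interpreting intent.
    return "a person holding a sign"

def answer(image, question):
    # Stub for a VQA model answering one targeted probe question.
    return "no"

def analyze_meme(image, probe_questions):
    """Run text extraction and captioning, then iterate VQA probes
    that target symbolic and contextual cues."""
    analysis = MemeAnalysis(
        ocr_text=extract_text(image),
        caption=neutral_caption(image),
    )
    for q in probe_questions:
        analysis.vqa_answers.append((q, answer(image, q)))
    return analysis
```

The point of the iterative step is that each probe question (e.g. "Does the image reference a specific group?") surfaces a cue that neither the OCR text nor the caption exposes on its own; a downstream classifier then reasons over the combined record.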

📝 Abstract
Memes are widely used for humor and cultural commentary, but they are increasingly exploited to spread hateful content. Due to their multimodal nature, hateful memes often evade traditional text-only or image-only detection systems, particularly when they employ subtle or coded references. To address these challenges, we propose a multimodal hate detection framework that integrates five key components: OCR to extract embedded text, captioning to describe visual content neutrally, sub-label classification for granular categorization of hateful content, retrieval-augmented generation (RAG) for contextually relevant retrieval, and VQA for iterative analysis of symbolic and contextual cues. This enables the framework to uncover latent signals that simpler pipelines fail to detect. Experimental results on the Facebook Hateful Memes dataset reveal that the proposed framework exceeds the performance of unimodal and conventional multimodal models in both accuracy and AUC-ROC.
Problem

Research questions and friction points this paper is trying to address.

Detecting hateful content in multimodal memes
Overcoming limitations of text-only or image-only detection
Improving accuracy in identifying subtle hateful references
Innovation

Methods, ideas, or system contributions that make the work stand out.

OCR extracts embedded text from memes
Captioning neutrally describes visual content
VQA analyzes symbolic and contextual cues
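The hierarchical sub-label scheme mentioned in the summary can be sketched as a two-stage decision: a top-level hateful/not-hateful call, refined into finer sub-labels only when the meme is flagged. The label names and threshold below are illustrative assumptions, not the paper's actual taxonomy.

```python
# Hypothetical sub-label taxonomy; the paper's actual labels may differ.
SUB_LABELS = ["dehumanization", "mocking", "slur", "exclusion"]

def classify(binary_score, sub_scores, threshold=0.5):
    """Hierarchical decision: gate sub-label assignment on the
    top-level hatefulness score.

    binary_score -- model confidence that the meme is hateful (0..1)
    sub_scores   -- per-sub-label confidences, aligned with SUB_LABELS
    Returns (is_hateful, active_sub_labels).
    """
    if binary_score < threshold:
        # Not flagged at the top level: sub-labels are never assigned.
        return False, []
    active = [lbl for lbl, s in zip(SUB_LABELS, sub_scores) if s >= threshold]
    return True, active
```

Gating the fine-grained labels on the coarse decision keeps the two stages consistent: a meme can never carry a hate sub-label without first being judged hateful overall.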
Ali Anaissi
The University of Sydney, School of Computer Science, Camperdown, NSW 2008, Australia; University of Technology Sydney, TD School, Ultimo, Australia
Junaid Akram
The University of Sydney, School of Computer Science, Camperdown, NSW 2008, Australia; University of Technology Sydney, TD School, Ultimo, Australia; Australian Catholic University, Peter Faber Business School, North Sydney, NSW 2060 Australia
Kunal Chaturvedi
University of Technology Sydney, School of Computer Science, Ultimo, Australia
Ali Braytee
University of Technology Sydney
machine learning, optimization, data mining, computational biology