🤖 AI Summary
Addressing the challenge of detecting covert, modality-coupled hate speech in memes, where hateful intent is implicitly encoded across text and image, we propose a multimodal collaborative reasoning framework. Our method integrates OCR-based text extraction, neutral image captioning (via BLIP-2/LLaVA), ViT-BERT joint visual-linguistic encoding, contextual retrieval via retrieval-augmented generation (RAG), and a novel "neutral captioning + iterative visual question answering (VQA)" symbolic reasoning mechanism for attributing implicit hateful intent. We further introduce a hierarchical sub-label classification scheme coupled with a RAG-driven, context-aware paradigm to overcome unimodal blind spots. Evaluated on the Facebook Hateful Memes dataset, our approach surpasses state-of-the-art unimodal and multimodal models in both accuracy and AUC-ROC; notably, it achieves a 12.6% improvement in F1-score for covert hate detection, significantly advancing fine-grained semantic understanding of multimodal hate speech.
📝 Abstract
Memes are widely used for humor and cultural commentary, but they are increasingly exploited to spread hateful content. Because of their multimodal nature, hateful memes often evade traditional text-only or image-only detection systems, particularly when they employ subtle or coded references. To address these challenges, we propose a multimodal hate detection framework that integrates five key components: optical character recognition (OCR) to extract embedded text, captioning to describe visual content neutrally, sub-label classification for granular categorization of hateful content, retrieval-augmented generation (RAG) for contextually relevant retrieval, and visual question answering (VQA) for iterative analysis of symbolic and contextual cues. Together, these components enable the framework to uncover latent hate signals that simpler pipelines miss. Experimental results on the Facebook Hateful Memes dataset show that the proposed framework outperforms unimodal and conventional multimodal detection models in both accuracy and AUC-ROC.
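The five-stage pipeline described above can be sketched as a simple orchestration loop. This is a minimal illustrative sketch, not the authors' implementation: every function below is a hypothetical stand-in (in the paper, OCR would be a real text extractor, captioning would be BLIP-2/LLaVA, retrieval would use a dense retriever, and VQA would be a learned model), and the toy keyword rules exist only to make the data flow between stages concrete and runnable.

```python
# Illustrative sketch of the OCR -> captioning -> RAG -> VQA -> sub-label
# pipeline. All function bodies are stubs standing in for real models;
# names and rules here are assumptions, not the paper's API.

def extract_text(meme_image: str) -> str:
    # OCR stage: pull the text embedded in the meme image.
    return "they don't belong here"  # stubbed OCR output

def caption_image(meme_image: str) -> str:
    # Neutral captioning stage (BLIP-2/LLaVA in the paper): describe
    # visual content without judging intent.
    return "a crowd of people standing at a fence"  # stubbed caption

def retrieve_context(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # RAG stage: naive word-overlap scoring standing in for a dense retriever.
    overlap = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def iterative_vqa(caption: str, text: str, context: list[str]) -> dict:
    # Iterative VQA stage: ask targeted questions about symbolic/coded cues.
    # A toy keyword rule stands in for a real VQA model here.
    coded_terms = {"belong", "replace", "invade"}
    hit = any(term in text.lower() for term in coded_terms)
    return {"targets_group": hit, "uses_coded_language": hit}

def classify(vqa_answers: dict) -> tuple[str, str]:
    # Hierarchical sub-label stage: top-level hateful/benign decision,
    # then a finer-grained sub-label when hateful.
    if vqa_answers["targets_group"]:
        sub = "exclusionary" if vqa_answers["uses_coded_language"] else "other"
        return "hateful", sub
    return "benign", "none"

def detect(meme_image: str, corpus: list[str]) -> dict:
    # Orchestrate the five stages end to end for one meme.
    text = extract_text(meme_image)
    caption = caption_image(meme_image)
    context = retrieve_context(text + " " + caption, corpus)
    answers = iterative_vqa(caption, text, context)
    label, sub_label = classify(answers)
    return {"label": label, "sub_label": sub_label, "context": context}
```

The point of the sketch is the control flow: the OCR text and neutral caption are fused into a retrieval query, retrieved context conditions the VQA questions, and the VQA answers, rather than raw pixels or text, drive the final hierarchical classification.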