Exploring the Limits of Zero Shot Vision Language Models for Hate Meme Detection: The Vulnerabilities and their Interpretations

📅 2024-02-19
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the growing prevalence of hateful memes on social media, this work investigates the zero-shot detection capabilities of vision-language models (VLMs), circumventing the bottleneck of scarce annotated data. We systematically evaluate mainstream VLMs—including CLIP, Flamingo, and LLaVA—via multimodal prompt engineering and employ superpixel occlusion-based interpretability analysis to diagnose misclassification mechanisms. Our key contribution is the first taxonomy of misclassification patterns for zero-shot hateful meme detection, identifying six canonical error types that expose critical robustness deficiencies in VLMs under semantic incongruence, metaphorical abuse, and culturally biased contexts. This taxonomy provides an interpretable, attribution-aware framework for safety alignment and establishes an empirical foundation for designing next-generation content safety guardrails.

📝 Abstract
There is a rapid increase in the use of multimedia content on current social media platforms. One of the most popular forms of such multimedia content is the meme. While memes were primarily invented to promote funny and buoyant discussions, malevolent users exploit memes to target individuals or vulnerable communities, making it imperative to identify and address such instances of hateful memes. Social media platforms are thus in dire need of active moderation of such harmful content. While manual moderation is extremely difficult at this scale, automatic moderation is challenged by the need for good-quality annotated data to train hate meme detection algorithms. This makes a perfect pretext for exploring the power of modern-day vision language models (VLMs), which have exhibited outstanding performance across various tasks. In this paper we study the effectiveness of VLMs in handling intricate tasks such as hate meme detection in a completely zero-shot setting, so that there is no dependency on annotated data for the task. We perform thorough prompt engineering and query state-of-the-art VLMs using various prompt types to detect hateful/harmful memes. We further interpret the misclassification cases using a novel superpixel-based occlusion method. Finally, we show that these misclassifications can be neatly arranged into a typology of error classes, knowledge of which should enable the design of better safety guardrails in the future.
Problem

Research questions and friction points this paper is trying to address.

Assessing zero-shot VLMs for hate meme detection
Identifying vulnerabilities in VLM-based moderation
Developing a typology of error classes for misclassified memes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot vision language models for hate meme detection
Thorough prompt engineering with various prompt types
Superpixel-based occlusion method for interpreting misclassifications
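The occlusion idea can be illustrated with a minimal sketch: mask out one image region at a time and record how much the model's hate score drops. Note this is a simplified, hypothetical illustration, not the paper's implementation; it uses rectangular grid cells as a stand-in for true superpixels (which would come from a segmentation algorithm such as SLIC), and `score_fn` is a placeholder for a real VLM scoring call.

```python
import numpy as np

def occlusion_attribution(image, score_fn, grid=4, fill=0.0):
    """Occlude each grid cell and record the drop in the model's score.

    image: (H, W, C) float array.
    score_fn: callable mapping an image to a scalar hate score
              (hypothetical stand-in for a VLM query).
    grid: number of cells per side; cells approximate superpixels
          in this simplified sketch.
    Returns a (grid, grid) heatmap; positive entries mark regions
    whose removal lowers the score, i.e. regions the model relied on.
    """
    h, w = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            occluded = image.copy()
            r0, r1 = i * h // grid, (i + 1) * h // grid
            c0, c1 = j * w // grid, (j + 1) * w // grid
            occluded[r0:r1, c0:c1] = fill  # mask this region
            heat[i, j] = base - score_fn(occluded)
    return heat

# Toy usage: a stand-in "model" whose score is the mean of the
# top-left quadrant, so only that quadrant should attribute.
img = np.zeros((8, 8, 3))
img[:4, :4] = 1.0
heat = occlusion_attribution(img, lambda x: float(x[:4, :4].mean()), grid=2)
```

In the toy run, occluding the top-left cell zeroes the score, so `heat[0, 0]` is 1.0 while the other cells are 0, mirroring how the method localizes the image evidence behind a (mis)classification.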
Naquee Rizwan
Indian Institute of Technology, Kharagpur
Paramananda Bhaskar
Indian Institute of Technology, Kharagpur
Mithun Das
Indian Institute of Technology, Kharagpur
Swadhin Satyaprakash Majhi
Indian Institute of Technology, Kharagpur
Punyajoy Saha
Indian Institute of Technology, Kharagpur
Animesh Mukherjee
Professor of Computer Science, IIT Kharagpur, FNAE, Distinguished Member, ACM
Language dynamics · Complex systems and networks · Web and social media