Rainbow Noise: Stress-Testing Multimodal Harmful-Meme Detectors on LGBTQ Content

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Harmful memes targeting LGBTQ+ communities frequently evade existing detection models via textual obfuscation and image perturbations, exposing critical robustness deficiencies in current multimodal approaches. To address this, we introduce the first LGBTQ+-focused multimodal harmful meme robustness benchmark, incorporating four categories of textual attacks and three types of image perturbations. We further propose a lightweight Text Denoising Adapter (TDA) that significantly enhances model resilience against adversarial text variants. Systematic adversarial evaluations are conducted on MemeCLIP and MemeBLIP2. Ablation studies identify architecture design and data composition as key determinants of robustness. Results show that MemeCLIP exhibits relatively stable robustness, whereas MemeBLIP2 integrated with TDA achieves superior performance—demonstrating that lightweight, modular enhancements can effectively strengthen the adversarial resilience of multimodal safety systems.

📝 Abstract
Hateful memes aimed at LGBTQ+ communities often evade detection by tweaking either the caption, the image, or both. We build the first robustness benchmark for this setting, pairing four realistic caption attacks with three canonical image corruptions and testing all combinations on the PrideMM dataset. Two state-of-the-art detectors, MemeCLIP and MemeBLIP2, serve as case studies, and we introduce a lightweight Text Denoising Adapter (TDA) to enhance the latter's resilience. Across the grid, MemeCLIP degrades more gently, while MemeBLIP2 is particularly sensitive to the caption edits that disrupt its language processing. However, the addition of the TDA not only remedies this weakness but makes MemeBLIP2 the most robust model overall. Ablations reveal that all systems lean heavily on text, but architectural choices and pre-training data significantly impact robustness. Our benchmark exposes where current multimodal safety models crack and demonstrates that targeted, lightweight modules like the TDA offer a powerful path towards stronger defences.
Problem

Research questions and friction points this paper is trying to address.

Detecting hateful memes targeting LGBTQ+ communities effectively
Testing robustness of multimodal detectors against text and image attacks
Improving resilience of detectors with lightweight text denoising adapters
Innovation

Methods, ideas, or system contributions that make the work stand out.

First LGBTQ+-focused robustness benchmark for hateful meme detection
Lightweight Text Denoising Adapter enhances resilience
Pairs caption attacks with image corruptions and tests all combinations
Ran Tong
Mathematics and Statistics Department, University of Texas at Dallas
Songtao Wei
Ph.D. student at University of Texas at Dallas
Machine Learning · Deep Learning · Large Language Models
Jiaqi Liu
Independent Researcher
Lanruo Wang
Naveen Jindal School of Management, University of Texas at Dallas, Richardson, TX 75080