Robust Harmful Meme Detection under Missing Modalities via Shared Representation Learning

πŸ“… 2026-02-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the sharp performance degradation that existing harmful meme detection methods suffer when the text modality is missing, such as when OCR fails. It presents the first systematic study of harmful meme detection under modality-missing conditions and proposes a novel approach based on multimodal shared representation learning. By independently projecting image and text features into a unified semantic space, the method can still perform robust inference when textual input is absent, relying on the shared representation alone. This strategy substantially reduces dependence on the text modality and yields superior performance on two benchmark datasets, outperforming current state-of-the-art methods in text-missing scenarios while improving both the utilization of visual features and overall model robustness.
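The core idea above can be illustrated with a minimal sketch: each modality gets its own projection into a common semantic space, and when text is unavailable the image projection alone serves as the shared representation. The dimensions, linear projections, and averaging-based fusion below are illustrative assumptions, not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (assumptions, not from the paper).
D_IMG, D_TXT, D_SHARED = 512, 300, 128

# Independent projection matrices, one per modality.
W_img = rng.standard_normal((D_IMG, D_SHARED)) * 0.01
W_txt = rng.standard_normal((D_TXT, D_SHARED)) * 0.01

def project(x, W):
    """Project a modality-specific feature into the shared space (L2-normalized)."""
    z = x @ W
    return z / (np.linalg.norm(z) + 1e-8)

def shared_representation(img_feat, txt_feat=None):
    """Fuse whichever modalities are available in the shared space.

    When the text feature is missing (e.g. OCR failure), the image
    projection alone is used, so inference degrades gracefully
    instead of breaking.
    """
    z_img = project(img_feat, W_img)
    if txt_feat is None:
        return z_img
    z_txt = project(txt_feat, W_txt)
    return (z_img + z_txt) / 2.0  # simple averaging fusion (assumption)

img = rng.standard_normal(D_IMG)
txt = rng.standard_normal(D_TXT)

z_full = shared_representation(img, txt)   # both modalities present
z_img_only = shared_representation(img)    # text missing
```

A downstream classifier trained on this shared space sees a vector of the same shape regardless of which modalities were present, which is what lets the same model handle modality-incomplete inputs.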

πŸ“ Abstract
Internet memes are powerful tools for communication, capable of spreading political, psychological, and sociocultural ideas. However, they can be harmful and can be used to disseminate hate toward targeted individuals or groups. Although previous studies have focused on designing new detection methods, these often rely on modal-complete data, such as text and images. In real-world settings, however, modalities like text may be missing due to issues like poor OCR quality, making existing methods sensitive to missing information and leading to performance deterioration. To address this gap, in this paper, we present the first-of-its-kind work to comprehensively investigate the behavior of harmful meme detection methods in the presence of modal-incomplete data. Specifically, we propose a new baseline method that learns a shared representation for multiple modalities by projecting them independently. These shared representations can then be leveraged when data is modal-incomplete. Experimental results on two benchmark datasets demonstrate that our method outperforms existing approaches when text is missing. Moreover, these results suggest that our method allows for better integration of visual features, reducing dependence on text and improving robustness in scenarios where textual information is missing. Our work represents a significant step forward in enabling the real-world application of harmful meme detection, particularly in situations where a modality is absent.
Problem

Research questions and friction points this paper is trying to address.

harmful meme detection
missing modalities
multimodal learning
robustness
shared representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

missing modalities
shared representation learning
harmful meme detection
multimodal robustness
modality-incomplete data
πŸ”Ž Similar Papers
No similar papers found.