AI Summary
This work addresses the significant performance degradation of existing harmful meme detection methods when the textual modality is missing, such as when OCR fails. It presents the first systematic study of harmful meme detection under modality-missing conditions and proposes a novel approach based on multimodal shared representation learning. By independently projecting image and text into a unified semantic space, the method learns a shared, resilient representation that enables robust inference even in the absence of textual input. This strategy substantially reduces reliance on the text modality and demonstrates superior performance on two benchmark datasets. Notably, it outperforms current state-of-the-art methods under text-missing scenarios, improving both the utilization of visual features and overall model robustness.
Abstract
Internet memes are powerful tools for communication, capable of spreading political, psychological, and sociocultural ideas. However, they can also be harmful, used to disseminate hate toward targeted individuals or groups. Although previous studies have focused on designing new detection methods, these often rely on modal-complete data, i.e., both text and images. In real-world settings, however, modalities such as text may be missing due to issues like poor OCR quality, making existing methods sensitive to missing information and leading to performance deterioration. To address this gap, we present the first work to comprehensively investigate the behavior of harmful meme detection methods in the presence of modal-incomplete data. Specifically, we propose a new baseline method that learns a shared representation for multiple modalities by projecting each modality independently. These shared representations can then be leveraged when data is modal-incomplete. Experimental results on two benchmark datasets demonstrate that our method outperforms existing approaches when text is missing. Moreover, these results suggest that our method allows for better integration of visual features, reducing dependence on text and improving robustness in scenarios where textual information is missing. Our work represents a significant step toward the real-world application of harmful meme detection, particularly in situations where a modality is absent.
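The core idea described above, projecting each modality independently into a shared semantic space so that inference can fall back on the image projection alone when text is missing, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: all dimensions, the linear projections, and the average-fusion rule are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature dimensions (assumptions, not from the paper)
D_IMG, D_TXT, D_SHARED = 512, 300, 128

# Independent per-modality projections into the shared space
W_img = rng.standard_normal((D_IMG, D_SHARED)) * 0.01
W_txt = rng.standard_normal((D_TXT, D_SHARED)) * 0.01

def shared_representation(img_feat, txt_feat=None):
    """Fuse whichever modalities are available in the shared space.

    If text is missing (e.g., OCR failed), fall back to the image
    projection alone; because both modalities live in the same space,
    downstream classification can proceed unchanged.
    """
    z_img = img_feat @ W_img
    if txt_feat is None:
        return z_img                      # text-missing inference path
    z_txt = txt_feat @ W_txt
    return (z_img + z_txt) / 2            # simple average fusion (assumption)

# Modal-complete vs. text-missing inference produce same-shaped inputs
# for the downstream classifier.
img = rng.standard_normal(D_IMG)
txt = rng.standard_normal(D_TXT)
z_full = shared_representation(img, txt)
z_missing = shared_representation(img)
```

In practice the projections would be learned jointly with the detection objective so that the two modalities align in the shared space; the sketch only shows why a missing modality does not change the classifier's input shape.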