AI Summary
Detecting gender bias in online memes presents significant challenges due to multimodal ambiguity, cultural context, and the confounding effects of humor, which conventional content-based models struggle to capture with the nuance of human perception. To address these challenges, this work proposes a human-centered multimodal fusion framework that integrates physiological signals, namely eye-tracking, heart rate, and electroencephalography, as objective indicators of human perception, alongside visual-linguistic features extracted from state-of-the-art vision-language models. Evaluated on the EXIST 2025 dataset, the proposed approach achieves an AUC of 0.794, a 3.4% improvement over strong baselines. Notably, it yields a substantial 26.3% gain in F1-score on the most challenging fine-grained category, "Misogyny and Non-Sexual Violence", demonstrating enhanced capability in identifying subtle forms of gender-based discrimination.
Abstract
The automated detection of sexism in memes is a challenging task due to multimodal ambiguity, cultural nuance, and the use of humor to provide plausible deniability. Content-only models often fail to capture the complexity of human perception. To address this limitation, we introduce and validate a human-centered paradigm that augments standard content features with physiological data. We created a novel resource by recording Eye-Tracking (ET), Heart Rate (HR), and Electroencephalography (EEG) from 16 subjects (8 per experiment) while they viewed 3984 memes from the EXIST 2025 dataset. Our statistical analysis reveals significant physiological differences in how subjects process sexist versus non-sexist content. Sexist memes were associated with higher cognitive load, reflected in increased fixation counts and longer reaction times, as well as differences in EEG spectral power across the Alpha, Beta, and Gamma bands, suggesting more demanding neural processing. Building on these findings, we propose a multimodal fusion model that integrates physiological signals with enriched textual-visual features derived from a Vision-Language Model (VLM). Our final model achieves an AUC of 0.794 in binary sexism detection, a statistically significant 3.4% improvement over a strong VLM-based baseline. The fusion is particularly effective for nuanced cases, boosting the F1-score for the most challenging fine-grained category, Misogyny and Non-Sexual Violence, by 26.3%. These results show that physiological responses provide an objective signal of perception that enhances the accuracy and human-awareness of automated systems for countering online sexism.
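The fusion approach described above can be illustrated with a minimal late-fusion sketch: physiological features (eye-tracking, heart rate, EEG band powers) are concatenated with a VLM embedding and scored by a linear classifier. All function names, dimensions, and weights here are illustrative assumptions, not the paper's actual architecture.

```python
import math

def fuse_features(vlm_embedding, et_features, hr_features, eeg_band_powers):
    """Concatenate per-modality feature vectors into one fused vector."""
    return (list(vlm_embedding) + list(et_features)
            + list(hr_features) + list(eeg_band_powers))

def linear_score(fused, weights, bias=0.0):
    """Linear score passed through a sigmoid to yield a sexism probability."""
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy example with made-up values for each modality
vlm = [0.2, -0.1]       # VLM text-image embedding (illustrative)
et = [12.0]             # e.g. fixation count from eye-tracking
hr = [72.0]             # mean heart rate in bpm
eeg = [0.5, 0.3, 0.1]   # Alpha / Beta / Gamma spectral powers (illustrative)

fused = fuse_features(vlm, et, hr, eeg)
prob = linear_score(fused, weights=[0.1] * len(fused))
```

In practice the classifier would be trained on the recorded subject data rather than using fixed weights; this sketch only shows the shape of the feature-level fusion.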