Human-Centered Multimodal Fusion for Sexism Detection in Memes with Eye-Tracking, Heart Rate, and EEG Signals

📅 2026-02-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Detecting gender bias in online memes presents significant challenges due to multimodal ambiguity, cultural context, and the confounding effects of humor, which conventional content-based models struggle to capture with the nuance of human perception. To address these challenges, this work proposes a human-centered multimodal fusion framework that integrates physiological signals (eye-tracking, heart rate, and electroencephalography) as objective indicators of human perception, alongside visual-linguistic features extracted from state-of-the-art vision-language models. Evaluated on the EXIST 2025 dataset, the proposed approach achieves an AUC of 0.794, a 3.4% improvement over a strong VLM-based baseline. Notably, it yields a substantial 26.3% gain in F1-score on the most challenging fine-grained category, Misogyny and Non-Sexual Violence, demonstrating enhanced capability in identifying subtle forms of gender-based discrimination.

📝 Abstract
The automated detection of sexism in memes is a challenging task due to multimodal ambiguity, cultural nuance, and the use of humor to provide plausible deniability. Content-only models often fail to capture the complexity of human perception. To address this limitation, we introduce and validate a human-centered paradigm that augments standard content features with physiological data. We created a novel resource by recording Eye-Tracking (ET), Heart Rate (HR), and Electroencephalography (EEG) from 16 subjects (8 per experiment) while they viewed 3984 memes from the EXIST 2025 dataset. Our statistical analysis reveals significant physiological differences in how subjects process sexist versus non-sexist content. Sexist memes were associated with higher cognitive load, reflected in increased fixation counts and longer reaction times, as well as differences in EEG spectral power across the Alpha, Beta, and Gamma bands, suggesting more demanding neural processing. Building on these findings, we propose a multimodal fusion model that integrates physiological signals with enriched textual-visual features derived from a Vision-Language Model (VLM). Our final model achieves an AUC of 0.794 in binary sexism detection, a statistically significant 3.4% improvement over a strong VLM-based baseline. The fusion is particularly effective for nuanced cases, boosting the F1-score for the most challenging fine-grained category, Misogyny and Non-Sexual Violence, by 26.3%. These results show that physiological responses provide an objective signal of perception that enhances the accuracy and human-awareness of automated systems for countering online sexism.
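The abstract reports differences in EEG spectral power across the Alpha, Beta, and Gamma bands. A minimal sketch of how relative band power can be computed from a single-channel EEG segment with a Welch periodogram is shown below; the band edges and sampling rate are common conventions assumed here, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

# Conventional band edges in Hz (assumed; the paper's exact edges are not given here).
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Relative spectral power per band for one EEG channel segment."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Fraction of total spectral power falling inside the band.
        out[name] = psd[mask].sum() / psd.sum()
    return out

# Synthetic check: a 10 Hz oscillation should concentrate power in the alpha band.
fs = 256.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).standard_normal(len(t))
powers = band_powers(eeg, fs)
```

In a study like this, such per-band powers would be aggregated per meme viewing and used as features alongside fixation counts and reaction times.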
Problem

Research questions and friction points this paper is trying to address.

sexism detection
memes
multimodal ambiguity
human perception
online content moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

physiological signals
multimodal fusion
sexism detection
vision-language model
human-centered AI
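The fusion idea listed above, combining physiological signals with VLM-derived content features, can be illustrated with a simple late-fusion sketch: concatenate the two feature vectors and train a classifier on the result. The feature dimensions, the synthetic data, and the logistic-regression head are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-ins (dimensions are assumptions): content embeddings from a VLM
# and per-meme physiological features such as fixation counts, reaction
# times, per-band EEG power, and heart-rate statistics.
n = 400
vlm_feats = rng.standard_normal((n, 32))
physio_feats = rng.standard_normal((n, 6))
labels = rng.integers(0, 2, n)
# Inject a weak class-dependent physiological signal so fusion has
# something to learn from (purely synthetic).
physio_feats[labels == 1, 0] += 1.0

fused = np.hstack([vlm_feats, physio_feats])  # late fusion by concatenation
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
auc = roc_auc_score(labels, clf.predict_proba(fused)[:, 1])
```

Evaluating AUC on the training data as above is only a smoke test; the paper's reported 0.794 AUC comes from proper held-out evaluation on EXIST 2025.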
Iván Arcos
PRHLT Research Center, Universitat Politècnica de València (UPV), 46022 Valencia, Spain
Paolo Rosso
Full Professor, Computer Science, Universitat Politècnica de València
Natural Language Processing, Fake News detection, Hate Speech detection, Irony detection, Artificial Intelligence
Elena Gomis-Vicent
PRHLT Research Center, Universitat Politècnica de València (UPV), 46022 Valencia, Spain