🤖 AI Summary
Addressing the challenge of detecting implicit sexism in digital social networks, this paper proposes ASCEND, an adaptive supervised contrastive learning framework. To enhance discriminative representation learning, ASCEND introduces a learnable threshold mechanism that dynamically selects positive sample pairs, thereby refining the geometry of the embedding space. It further integrates word-level attention with auxiliary sentiment, emotion, and toxicity features to strengthen fine-grained semantic modeling, and it jointly optimizes a supervised contrastive loss and a cross-entropy loss in an end-to-end manner. Evaluated on the EXIST2021 and MLSC benchmarks, ASCEND achieves average macro-F1 improvements of 9.86%, 29.63%, and 32.51% over state-of-the-art baselines across the benchmark tasks. These gains demonstrate its effectiveness in mitigating false positives and in detecting subtle, context-dependent gender bias in social media discourse.
📝 Abstract
The global reach of social media has amplified the spread of hateful content, including implicit sexism, which conventional detection methods often overlook. In this work, we introduce an Adaptive Supervised Contrastive lEarning framework for implicit sexism detectioN (ASCEND). A key innovation of our method is threshold-based contrastive learning: we compute cosine similarities between embeddings and treat a pair of samples as positive only if their similarity exceeds a learnable threshold. This mechanism refines the embedding space by robustly pulling together representations of semantically similar texts while pushing apart dissimilar ones, thereby reducing both false positives and false negatives. Textual features are enhanced through a word-level attention module and complemented with sentiment, emotion, and toxicity features. The final classification is obtained by jointly optimizing the contrastive loss with a cross-entropy loss. Evaluations on the EXIST2021 and MLSC datasets demonstrate that ASCEND significantly outperforms existing methods, with average macro-F1 improvements of 9.86%, 29.63%, and 32.51% across multiple tasks, highlighting its efficacy in capturing the subtle cues of implicit sexist language.
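The threshold-based positive-pair selection can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the batch embeddings and labels are toy inputs, the threshold `tau` is fixed here (ASCEND learns it jointly with the model), and the temperature `temp` is a common supervised-contrastive hyperparameter assumed for the sketch.

```python
import numpy as np

def cosine_sim_matrix(emb):
    # Row-normalize embeddings, then take pairwise dot products
    # to obtain the cosine-similarity matrix.
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return norm @ norm.T

def threshold_contrastive_loss(emb, labels, tau=0.5, temp=0.1):
    """Supervised contrastive loss where a same-label pair counts as a
    positive only if its cosine similarity exceeds the threshold tau.
    (Sketch: in ASCEND, tau is a learnable parameter; it is fixed here.)"""
    sim = cosine_sim_matrix(emb)
    logits = sim / temp
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # Candidate positives: same label, similarity above threshold, not self.
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i] and sim[i, j] > tau]
        if not positives:
            continue
        # Softmax denominator over all other samples in the batch.
        denom = sum(np.exp(logits[i, j]) for j in range(n) if j != i)
        for j in positives:
            loss += -np.log(np.exp(logits[i, j]) / denom)
            count += 1
    # Average over accepted positive pairs (0.0 if none pass the threshold).
    return loss / max(count, 1)
```

In training, this term would be combined with the classification objective as a weighted sum, e.g. `total = ce_loss + lam * threshold_contrastive_loss(emb, labels)`, matching the joint optimization described in the abstract.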