Explaining Matters: Leveraging Definitions and Semantic Expansion for Sexism Detection

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Online sexism detection faces three key challenges: data sparsity, severe class imbalance, and inter-annotator disagreement in fine-grained labeling, all of which hinder model generalization and reliable fine-grained discrimination. To address these, we propose Definition-based Data Augmentation (DDA) and Contextual Semantic Expansion (CSE), combining large language model prompting, semantically aligned synthetic data generation, and task-aware feature enrichment to build robust, fine-grained representations. We further design a complementary multi-model ensemble to resolve prediction ties. Evaluated on the EDOS benchmark, our approach achieves new state-of-the-art performance: +1.5 points in binary macro-F1 (Task A) and +4.1 points in fine-grained macro-F1 (Task C). Notably, this work is the first to explicitly incorporate interpretable, definition-based knowledge into both data augmentation and semantic modeling, significantly improving detection of implicit, context-dependent sexism.


📝 Abstract
The detection of sexism in online content remains an open problem, as harmful language disproportionately affects women and marginalized groups. While automated systems for sexism detection have been developed, they still face two key challenges: data sparsity and the nuanced nature of sexist language. Even in large, well-curated datasets like the Explainable Detection of Online Sexism (EDOS), severe class imbalance hinders model generalization. Additionally, the overlapping and ambiguous boundaries of fine-grained categories introduce substantial annotator disagreement, reflecting the difficulty of interpreting nuanced expressions of sexism. To address these challenges, we propose two prompt-based data augmentation techniques: Definition-based Data Augmentation (DDA), which leverages category-specific definitions to generate semantically-aligned synthetic examples, and Contextual Semantic Expansion (CSE), which targets systematic model errors by enriching examples with task-specific semantic features. To further improve reliability in fine-grained classification, we introduce an ensemble strategy that resolves prediction ties by aggregating complementary perspectives from multiple language models. Our experimental evaluation on the EDOS dataset demonstrates state-of-the-art performance across all tasks, with notable improvements of macro F1 by 1.5 points for binary classification (Task A) and 4.1 points for fine-grained classification (Task C).
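The abstract describes DDA as prompting a language model with category-specific definitions so that synthetic examples stay semantically aligned with their label. A minimal sketch of such prompt construction, assuming illustrative placeholder definitions and wording (the paper's actual prompts and the official EDOS taxonomy text are not reproduced here):

```python
# Hypothetical sketch of Definition-based Data Augmentation (DDA) prompt
# construction. The category definition below is an illustrative paraphrase,
# not the official EDOS taxonomy text, and the prompt wording is assumed.

EDOS_DEFINITIONS = {
    "2.1 descriptive attacks": (
        "posts that attack or demean women through derogatory descriptions"
    ),
}

def build_dda_prompt(category: str, definition: str,
                     seed_example: str, n: int = 5) -> str:
    """Compose a generation prompt that anchors synthetic examples to a
    category definition, keeping outputs semantically aligned with the label."""
    return (
        f"Category: {category}\n"
        f"Definition: {definition}\n"
        f"Example: {seed_example}\n"
        f"Write {n} new short social-media posts that clearly fit this "
        f"definition, varying wording and topic."
    )
```

The resulting prompt would then be sent to an LLM, and its generations added to the minority classes to counter the class imbalance the abstract describes.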
Problem

Research questions and friction points this paper is trying to address.

Detecting nuanced sexist language in online content
Addressing data sparsity and class imbalance issues
Resolving annotator disagreement in fine-grained categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Definition-based Data Augmentation for semantic alignment
Contextual Semantic Expansion targeting model errors
Ensemble strategy resolving prediction ties
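The tie-resolving ensemble above can be pictured as majority voting with a deterministic fallback; a minimal sketch, assuming a fixed priority ordering over models (the paper's exact aggregation rule is not specified here):

```python
# Hypothetical sketch of an ensemble that resolves prediction ties.
# The priority-ordering fallback is one simple scheme; the paper's
# actual aggregation strategy may differ.
from collections import Counter

def ensemble_predict(votes: dict, priority: list) -> str:
    """Majority vote over per-model labels; on a tie, defer to the first
    model in `priority` whose vote is among the tied labels."""
    counts = Counter(votes.values())
    top = max(counts.values())
    tied = {label for label, c in counts.items() if c == top}
    if len(tied) == 1:
        return tied.pop()
    for model in priority:          # break the tie deterministically
        if votes[model] in tied:
            return votes[model]
    return sorted(tied)[0]          # stable last-resort fallback
```

For example, with votes `{"m1": "sexist", "m2": "not sexist", "m3": "sexist"}` the majority label wins outright; with a two-way tie, the priority list decides.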
Sahrish Khan
Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
Arshad Jhumka
School of Computing, University of Leeds, Leeds LS2 9JT, UK
Gabriele Pergola
Assistant Professor, University of Warwick
Natural Language Processing · Sentiment Analysis · Question Answering · Machine Learning