🤖 AI Summary
Online sexism detection faces three key challenges: data sparsity, severe class imbalance, and inter-annotator disagreement in fine-grained labeling, all of which hinder model generalization and reliable discrimination between categories. To address these, we propose Definition-based Data Augmentation (DDA) and Contextual Semantic Expansion (CSE), which combine large language model prompting, semantically aligned synthetic data generation, and task-aware feature enrichment to build robust fine-grained representations. We further design a complementary multi-model ensemble to resolve prediction ties. Evaluated on the EDOS benchmark, our approach achieves new state-of-the-art performance: +1.5 points in binary macro-F1 and +4.1 points in fine-grained macro-F1. Notably, this work is the first to explicitly incorporate interpretable, definition-based knowledge into both data augmentation and semantic modeling, substantially improving the detection of implicit, context-dependent sexism.
📝 Abstract
The detection of sexism in online content remains an open problem, as harmful language disproportionately affects women and marginalized groups. While automated systems for sexism detection have been developed, they still face two key challenges: data sparsity and the nuanced nature of sexist language. Even in large, well-curated datasets like the Explainable Detection of Online Sexism (EDOS) dataset, severe class imbalance hinders model generalization. Additionally, the overlapping and ambiguous boundaries of fine-grained categories introduce substantial annotator disagreement, reflecting the difficulty of interpreting nuanced expressions of sexism. To address these challenges, we propose two prompt-based data augmentation techniques: Definition-based Data Augmentation (DDA), which leverages category-specific definitions to generate semantically aligned synthetic examples, and Contextual Semantic Expansion (CSE), which targets systematic model errors by enriching examples with task-specific semantic features. To further improve reliability in fine-grained classification, we introduce an ensemble strategy that resolves prediction ties by aggregating complementary perspectives from multiple language models. Our experimental evaluation on the EDOS dataset demonstrates state-of-the-art performance across all tasks, with notable improvements in macro-F1 of 1.5 points for binary classification (Task A) and 4.1 points for fine-grained classification (Task C).
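To make the DDA idea concrete, here is a minimal sketch of how a definition-grounded augmentation prompt might be assembled. The function name, the prompt wording, and the placeholder definition are all illustrative assumptions, not the authors' actual prompts or the EDOS taxonomy text.

```python
def build_dda_prompt(category, definition, seed_examples, n=5):
    """Assemble a hypothetical definition-based augmentation prompt.

    The prompt anchors the LLM to a category's definition and a few
    seed examples, then asks for n new, semantically aligned examples.
    """
    examples = "\n".join(f"- {e}" for e in seed_examples)
    return (
        f"Category: {category}\n"
        f"Definition: {definition}\n"
        f"Seed examples:\n{examples}\n\n"
        f"Write {n} new examples that clearly match this definition "
        f"and no other category."
    )

# Illustrative placeholder definition, not the real EDOS wording.
prompt = build_dda_prompt(
    "2.1 Descriptive attacks",
    "Statements ascribing negative attributes to women as a group.",
    ["example post 1", "example post 2"],
)
```

The resulting string would be sent to an LLM, and the generated examples filtered for semantic alignment before being added to the training set.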
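The tie-resolving ensemble can be sketched as follows: a plain majority vote over per-model labels, falling back to summed per-model confidence scores when the vote is tied. This is a minimal illustration of the idea; the paper's actual aggregation rule, model set, and score format are not specified here.

```python
from collections import Counter

def ensemble_predict(model_labels, model_confidences):
    """Majority vote with confidence-based tie-breaking (sketch).

    model_labels: predicted label from each model, e.g. ["2.1", "2.2", "2.1"]
    model_confidences: per-model dicts mapping label -> confidence score
    """
    votes = Counter(model_labels)
    ranked = votes.most_common()
    best_count = ranked[0][1]
    tied = [label for label, count in ranked if count == best_count]
    if len(tied) == 1:
        return tied[0]  # clear majority, no tie to resolve
    # Tie: sum each tied label's confidence across all models and
    # pick the label with the highest aggregate score.
    totals = {
        label: sum(conf.get(label, 0.0) for conf in model_confidences)
        for label in tied
    }
    return max(totals, key=totals.get)
```

With two models voting for different labels, the label whose summed confidence is higher wins, so complementary models can break ties that a bare vote cannot.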