🤖 AI Summary
This work addresses gender representation bias in LLM training data, specifically the imbalance between male and female mentions, by introducing the first context-aware quantification method tailored to gendered languages such as Spanish. Unlike English-centric stereotype assessments, the approach leverages LLMs' semantic understanding via prompt engineering and zero-/few-shot classification to automatically identify and disambiguate gendered nouns and pronouns that refer to humans, enabling fine-grained, annotation-free, cross-linguistically comparable bias measurement. Experiments on four Spanish-language benchmark datasets reveal pronounced male dominance, with male-to-female mention ratios ranging from 4:1 to 6:1. Crucially, the method overcomes the technical challenges posed by grammatical gender, delivering a reproducible, upstream diagnostic tool for fairness-aware NLP.
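The summary describes the classification step only at a high level. The sketch below is a minimal illustration of what such a zero-/few-shot pipeline could look like, assuming an OpenAI-style chat API; the prompt wording, the model name, and the helper names `classify_mention` and `male_to_female_ratio` are our illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the zero-shot mention-classification step described above.
# Assumptions (not from the paper): an OpenAI-style chat API, the model name,
# the prompt wording, and the helper names are all illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "You will be given a Spanish sentence and one word from it.\n"
    "If the word is a noun or pronoun that refers to a human being, answer "
    "MALE or FEMALE according to the gender of the person it refers to.\n"
    "Otherwise (objects, abstract concepts, non-human referents) answer NONE.\n"
    "Answer with a single word.\n\n"
    "Sentence: {sentence}\n"
    "Word: {word}"
)

def classify_mention(sentence: str, word: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM whether `word`, in context, refers to a male or female human."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user",
                   "content": PROMPT.format(sentence=sentence, word=word)}],
    )
    return response.choices[0].message.content.strip().upper()

def male_to_female_ratio(pairs: list[tuple[str, str]]) -> float:
    """Aggregate (sentence, candidate word) pairs into a male:female mention ratio."""
    counts = {"MALE": 0, "FEMALE": 0}
    for sentence, word in pairs:
        label = classify_mention(sentence, word)
        if label in counts:
            counts[label] += 1
    return counts["MALE"] / max(counts["FEMALE"], 1)  # guard against division by zero

# Grammatical gender alone is not enough: "mesa" is feminine but not human,
# which is exactly the disambiguation the context-aware prompt handles.
pairs = [
    ("El médico atendió a la paciente.", "médico"),    # expected MALE
    ("El médico atendió a la paciente.", "paciente"),  # expected FEMALE
    ("La mesa estaba puesta.", "mesa"),                # expected NONE
]
print(male_to_female_ratio(pairs))
```

The example sentences show why per-word, in-context classification matters in a gendered language: grammatically feminine words like "mesa" (table) must be filtered out, while "paciente", whose form is gender-invariant, can only be resolved from context.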
📝 Abstract
Gender bias in the text corpora used for a variety of natural language processing (NLP) tasks, such as training large language models (LLMs), can perpetuate and amplify societal inequalities. This phenomenon is particularly pronounced in gendered languages like Spanish or French, where grammatical structures inherently encode gender, making bias analysis more challenging. A first step in quantifying gender bias in text is to compute biases in gender representation, i.e., differences in the prevalence of words referring to males versus females. Existing methods for measuring gender representation bias in text corpora have mainly been proposed for English and do not generalize to gendered languages because of the intrinsic linguistic differences between English and those languages. This paper introduces a novel methodology that leverages the contextual understanding capabilities of LLMs to quantitatively measure gender representation bias in Spanish corpora. By using LLMs to identify gendered nouns and pronouns and to classify whether they refer to human entities, our approach provides a robust analysis of gender representation bias in gendered languages. We empirically validate our method on four widely used benchmark datasets, uncovering significant disparities in gender prevalence, with male-to-female ratios ranging from 4:1 to 6:1. These findings demonstrate the value of our methodology for quantifying bias in gendered-language corpora and suggest its applicability across NLP, contributing to the development of more equitable language technologies.
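For concreteness, the headline result can be read as a simple ratio of human-referring mention counts. In the notation below (ours, not the paper's), $N_{\text{male}}$ and $N_{\text{female}}$ denote the numbers of nouns and pronouns classified as referring to male and female humans, respectively:

```latex
r = \frac{N_{\text{male}}}{N_{\text{female}}}, \qquad 4 \le r \le 6 \ \text{across the four benchmark corpora.}
```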