Leveraging Large Language Models to Measure Gender Representation Bias in Gendered Language Corpora

📅 2024-06-19
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses gender representation bias in LLM training data—specifically, the imbalance in male versus female mentions—by introducing the first context-aware quantification method tailored to inflectional languages (e.g., Spanish). Unlike English-centric stereotype assessments, our approach leverages LLMs’ semantic understanding via prompt engineering and zero-/few-shot classification to automatically identify and disambiguate gendered nouns and pronouns referring to humans, enabling fine-grained, annotation-free, cross-linguistically comparable bias measurement. Experiments on four Spanish-language benchmark datasets (ES-News, CORA, etc.) reveal pronounced male dominance, with male-to-female mention ratios ranging from 4:1 to 6:1. Crucially, the method overcomes technical challenges posed by grammatical gender, delivering a reproducible, upstream diagnostic tool for fairness-aware NLP.
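The summary above describes prompting an LLM to decide, in context, whether a gendered Spanish noun or pronoun refers to a male or female human. A minimal sketch of what such a zero-shot query might look like follows; the prompt wording, label set, and `build_prompt` helper are illustrative assumptions, not the paper's actual prompt:

```python
# Illustrative zero-shot prompt for asking an LLM whether a Spanish word,
# in its sentence context, refers to a male human, a female human, or is
# not a human reference at all. Prompt text and labels are assumptions.

PROMPT_TEMPLATE = (
    'Sentence: "{sentence}"\n'
    'Does the word "{word}" in this sentence refer to a human? '
    "If so, is the referent male or female? "
    "Answer with exactly one of: MALE, FEMALE, NOT_HUMAN."
)

def build_prompt(sentence: str, word: str) -> str:
    """Fill the template for a single (sentence, word) query to an LLM."""
    return PROMPT_TEMPLATE.format(sentence=sentence, word=word)

# Example query for the grammatically feminine noun "médica" (female doctor).
print(build_prompt("La médica atendió al paciente.", "médica"))
```

Disambiguating via the full sentence, rather than by grammatical gender alone, is what lets the approach handle inflectional languages where grammatical and referential gender can diverge.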

📝 Abstract
Gender bias in text corpora that are used for a variety of natural language processing (NLP) tasks, such as for training large language models (LLMs), can lead to the perpetuation and amplification of societal inequalities. This phenomenon is particularly pronounced in gendered languages like Spanish or French, where grammatical structures inherently encode gender, making the bias analysis more challenging. A first step in quantifying gender bias in text entails computing biases in gender representation, i.e., differences in the prevalence of words referring to males vs. females. Existing methods to measure gender representation bias in text corpora have mainly been proposed for English and do not generalize to gendered languages due to the intrinsic linguistic differences between English and gendered languages. This paper introduces a novel methodology that leverages the contextual understanding capabilities of LLMs to quantitatively measure gender representation bias in Spanish corpora. By utilizing LLMs to identify and classify gendered nouns and pronouns in relation to their reference to human entities, our approach provides a robust analysis of gender representation bias in gendered languages. We empirically validate our method on four widely-used benchmark datasets, uncovering significant gender prevalence disparities with a male-to-female ratio ranging from 4:1 to 6:1. These findings demonstrate the value of our methodology for bias quantification in gendered language corpora and suggest its application in NLP, contributing to the development of more equitable language technologies.
Problem

Research questions and friction points this paper is trying to address.

Measure gender representation bias in gendered language corpora
Detect and quantify bias in LLM training data for gendered languages
Analyze corpus-level gender bias impacts on multilingual NLP models
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based gender bias detection in corpora
Contextual understanding for word classification
Mitigating bias via opposite-gender training
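Once each gendered word has been classified, the representation-bias metric reduces to a ratio of male to female human mentions. A minimal sketch under the assumption of a three-way label scheme (the label names are illustrative, not the paper's exact annotation scheme):

```python
from collections import Counter

def representation_ratio(labels):
    """Male-to-female mention ratio from per-word classifications.

    `labels` is a list of strings such as "MALE", "FEMALE", or
    "NOT_HUMAN" produced by an LLM classifier; only human references
    contribute to the ratio. Label names are illustrative assumptions.
    """
    counts = Counter(labels)
    male, female = counts["MALE"], counts["FEMALE"]
    if female == 0:
        return float("inf") if male else 0.0
    return male / female

# Toy annotation stream: 4 male vs. 1 female human mentions -> ratio 4.0,
# matching the lower end of the 4:1 to 6:1 disparities reported above.
print(representation_ratio(
    ["MALE", "MALE", "FEMALE", "MALE", "NOT_HUMAN", "MALE"]
))
```

Because the metric is computed over classified human references rather than raw word counts, it remains comparable across languages with different grammatical-gender systems.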
Erik Derner
CIIRC CTU in Prague
Generative AI · Trustworthy AI · Human-centric AI · AI Safety · AI Security
Sara Sansalvador de la Fuente
ELLIS Alicante, Alicante, Spain
Yoan Gutiérrez
University of Alicante, Alicante, Spain
Paloma Moreda
Universidad de Alicante
natural language processing · human language technologies
Nuria Oliver
ELLIS Alicante, Alicante, Spain