FairTranslate: An English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses gender bias in English-to-French machine translation (MT) by large language models (LLMs), focusing on non-binary gender expressions such as the singular “they”. We construct the first manually curated, fine-grained evaluation dataset for this task, comprising 2,418 occupation-related English-French sentence pairs annotated with a ternary gender ground truth (masculine/feminine/inclusive) and metadata including stereotype alignment and grammatical gender-indicator ambiguity. Departing from conventional binary frameworks, we introduce inclusive language and ternary labeling as systematic components of MT evaluation. We assess four LLMs (Gemma2-2B, Mistral-7B, Llama3.1-8B, and Llama3.3-70B) under zero-shot and few-shot prompting. All models exhibit substantial gender bias, with particularly severe misrendering of inclusive references, largely attributable to French’s pervasive grammatical gender marking. The dataset is released on Hugging Face and the evaluation code on GitHub to enable reproducible fairness analysis.
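The evaluation described above boils down to checking, per ternary label, how often a model's translation renders the gender that the ground truth specifies. A minimal sketch of that metric is below; the field names (`gender_label`, `predicted_gender`) and the toy records are illustrative assumptions, not the dataset's actual schema or results.

```python
from collections import defaultdict

# Toy records mimicking the dataset's ternary gender ground truth.
# In the real setup, predicted_gender would come from inspecting the
# gender marking in a model's French translation.
records = [
    {"gender_label": "masculine", "predicted_gender": "masculine"},
    {"gender_label": "feminine",  "predicted_gender": "masculine"},
    {"gender_label": "inclusive", "predicted_gender": "masculine"},
    {"gender_label": "inclusive", "predicted_gender": "inclusive"},
]

def per_label_accuracy(records):
    """Fraction of translations whose rendered gender matches the
    ground truth, broken down by label (masculine/feminine/inclusive)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        label = r["gender_label"]
        totals[label] += 1
        hits[label] += r["predicted_gender"] == label
    return {label: hits[label] / totals[label] for label in totals}

print(per_label_accuracy(records))
# {'masculine': 1.0, 'feminine': 0.0, 'inclusive': 0.5}
```

A gap between the masculine score and the feminine or inclusive scores, as in this toy output, is the kind of disparity the paper reports across all four models.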

📝 Abstract
Large Language Models (LLMs) are increasingly leveraged for translation tasks but often fall short when translating inclusive language -- such as texts containing the singular 'they' pronoun or otherwise reflecting fair linguistic protocols. Because these challenges span both computational and societal domains, it is imperative to critically evaluate how well LLMs handle inclusive translation with a well-founded framework. This paper presents FairTranslate, a novel, fully human-annotated dataset designed to evaluate non-binary gender biases in machine translation systems from English to French. FairTranslate includes 2418 English-French sentence pairs related to occupations, annotated with rich metadata such as the stereotypical alignment of the occupation, grammatical gender indicator ambiguity, and the ground-truth gender label (male, female, or inclusive). We evaluate four leading LLMs (Gemma2-2B, Mistral-7B, Llama3.1-8B, Llama3.3-70B) on this dataset under different prompting procedures. Our results reveal substantial biases in gender representation across LLMs, highlighting persistent challenges in achieving equitable outcomes in machine translation. These findings underscore the need for focused strategies and interventions aimed at ensuring fair and inclusive language usage in LLM-based translation systems. We make the FairTranslate dataset publicly available on Hugging Face, and disclose the code for all experiments on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Evaluating gender bias in English-French machine translation
Assessing LLM performance on inclusive language translation
Addressing non-binary gender representation challenges in translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully human-annotated dataset for gender bias evaluation
Evaluates non-binary gender biases in English-to-French translation
Tests LLMs under diverse prompting strategies on inclusive language