GAMBIT+: A Challenge Set for Evaluating Gender Bias in Machine Translation Quality Estimation Metrics

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Gender bias in machine translation quality estimation (QE) metrics remains under-investigated, with prior analyses limited by small-scale data, narrow occupational coverage, and a monolingual focus. Method: We introduce the first large-scale, multilingual QE gender bias challenge set, covering 33 source–target language pairs derived from the GAMBIT corpus. It employs gender-balanced dual-version target texts with grammatical consistency adjustments, organized in a fully aligned parallel structure. Contribution/Results: This design enables, for the first time, fine-grained, cross-lingual, occupation-aware gender bias analysis in QE, extending bias evaluation to multilingual and multi-gender settings. Experiments reveal statistically significant gendered scoring biases across mainstream QE metrics in diverse languages. The challenge set provides a reproducible benchmark and a methodological paradigm for fairness assessment in QE.

📝 Abstract
Gender bias in machine translation (MT) systems has been extensively documented, but bias in automatic quality estimation (QE) metrics remains comparatively underexplored. Existing studies suggest that QE metrics can also exhibit gender bias, yet most analyses are limited by small datasets, narrow occupational coverage, and restricted language variety. To address this gap, we introduce a large-scale challenge set specifically designed to probe the behavior of QE metrics when evaluating translations containing gender-ambiguous occupational terms. Building on the GAMBIT corpus of English texts with gender-ambiguous occupations, we extend coverage to three source languages that are genderless or natural-gendered, and eleven target languages with grammatical gender, resulting in 33 source-target language pairs. Each source text is paired with two target versions differing only in the grammatical gender of the occupational term(s) (masculine vs. feminine), with all dependent grammatical elements adjusted accordingly. An unbiased QE metric should assign equal or near-equal scores to both versions. The dataset's scale, breadth, and fully parallel design, where the same set of texts is aligned across all languages, enables fine-grained bias analysis by occupation and systematic comparisons across languages.
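The abstract's core probe — an unbiased QE metric should assign equal or near-equal scores to the masculine and feminine versions of the same translation — can be sketched as a paired score-gap computation. This is a minimal illustration, not the paper's actual analysis pipeline: the function and variable names are hypothetical, and the authors' experiments additionally apply statistical significance testing.

```python
from statistics import mean

def gender_score_gap(masc_scores, fem_scores):
    """Mean signed gap between the scores a QE metric assigns to the
    masculine and feminine versions of the same paired translations.
    A value near zero suggests no systematic gender preference;
    a positive value indicates the metric favors masculine versions.
    """
    if len(masc_scores) != len(fem_scores):
        raise ValueError("score lists must be paired item-for-item")
    gaps = [m - f for m, f in zip(masc_scores, fem_scores)]
    return mean(gaps)

# Toy scores for three parallel items, as a hypothetical QE metric
# that consistently favors the masculine variant might produce.
masc = [0.82, 0.79, 0.91]
fem = [0.78, 0.74, 0.87]
print(round(gender_score_gap(masc, fem), 3))  # 0.043
```

In the dataset's fully parallel design, the same computation can be repeated per occupation and per language pair, which is what makes the fine-grained, cross-lingual comparisons possible.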
Problem

Research questions and friction points this paper is trying to address.

Evaluating gender bias in machine translation quality estimation metrics
Addressing limitations of small datasets and narrow occupational coverage
Creating parallel translations with gender variations for bias analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale challenge set for gender bias evaluation
Extends GAMBIT corpus to 33 language pairs
Parallel design enables fine-grained bias analysis
Giorgos Filandrianos
Postdoctoral researcher
Explainable AI, NLP
Orfeas Menis Mastromichalakis
PhD Student, National Technical University of Athens
Explainable AI, AI Ethics, NLP
Wafaa Mohammed
University of Amsterdam, Netherlands
Giuseppe Attanasio
Postdoctoral Researcher, Instituto de Telecomunicações
AI Fairness, Transparency, Safety
Chrysoula Zerva
Instituto de Telecomunicações, Lisbon, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Portugal; ELLIS Unit Lisbon, Portugal