🤖 AI Summary
Existing quality estimation (QE) metrics for machine translation have received little scrutiny for gender bias: prior investigations rely on small-scale data, narrow occupational coverage, and a monolingual focus.
Method: We introduce the first large-scale, multilingual QE gender bias challenge set, covering 33 source–target language pairs derived from the GAMBIT corpus. It pairs each source text with gender-balanced dual-version target texts (masculine vs. feminine occupational terms, with dependent grammatical elements adjusted for consistency), organized in a fully aligned parallel structure.
Contribution/Results: This design enables, for the first time, fine-grained, cross-lingual, occupation-aware gender bias analysis in QE, extending bias evaluation to multilingual and multi-gender frameworks. Experiments reveal statistically significant gendered scoring biases across mainstream QE metrics in diverse languages. The challenge set provides a reproducible benchmark and methodological paradigm for fairness assessment in QE.
📝 Abstract
Gender bias in machine translation (MT) systems has been extensively documented, but bias in automatic quality estimation (QE) metrics remains comparatively underexplored. Existing studies suggest that QE metrics can also exhibit gender bias, yet most analyses are limited by small datasets, narrow occupational coverage, and restricted language variety. To address this gap, we introduce a large-scale challenge set specifically designed to probe the behavior of QE metrics when evaluating translations containing gender-ambiguous occupational terms. Building on the GAMBIT corpus of English texts with gender-ambiguous occupations, we extend coverage to three source languages that are genderless or natural-gendered, and eleven target languages with grammatical gender, resulting in 33 source–target language pairs. Each source text is paired with two target versions differing only in the grammatical gender of the occupational term(s) (masculine vs. feminine), with all dependent grammatical elements adjusted accordingly. An unbiased QE metric should assign equal or near-equal scores to both versions. The dataset's scale, breadth, and fully parallel design, where the same set of texts is aligned across all languages, enable fine-grained bias analysis by occupation and systematic comparisons across languages.
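The evaluation protocol described above — scoring both gendered target versions with the same QE metric and comparing the results — can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation code: the `Item` container, the `gender_score_gap` helper, and the deliberately biased dummy scorer are all hypothetical stand-ins for a real QE model and the real challenge set.

```python
from dataclasses import dataclass
from statistics import mean, stdev
from typing import Callable, List

@dataclass
class Item:
    """One challenge-set entry: a source text and its two target versions,
    which differ only in the grammatical gender of the occupational term."""
    src: str
    tgt_masc: str  # target with masculine occupational term(s)
    tgt_fem: str   # target with feminine occupational term(s)

def gender_score_gap(items: List[Item], qe: Callable[[str, str], float]) -> dict:
    """Score both versions of every item with a QE metric and summarise the
    per-item gap (masculine minus feminine). For an unbiased metric the mean
    gap should be at or near zero."""
    gaps = [qe(it.src, it.tgt_masc) - qe(it.src, it.tgt_fem) for it in items]
    return {
        "mean_gap": mean(gaps),
        "stdev_gap": stdev(gaps) if len(gaps) > 1 else 0.0,
        "pct_masc_preferred": sum(g > 0 for g in gaps) / len(gaps),
    }

# Toy demonstration: a dummy "metric" that penalises the feminine form,
# standing in for a real learned QE scorer.
def biased_qe(src: str, tgt: str) -> float:
    return 0.90 - (0.05 if "doctora" in tgt else 0.0)

items = [Item("The doctor arrived.", "El doctor llegó.", "La doctora llegó.")] * 3
print(gender_score_gap(items, biased_qe))
```

In the full setting, the same comparison would be broken down by occupation and by language pair, which the fully parallel design makes directly comparable; a significance test over the per-item gaps then establishes whether a metric's preference is systematic.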