🤖 AI Summary
This work addresses the pervasive gender bias in machine translation (MT) when translating from languages without grammatical gender (e.g., English) into morphologically gendered languages (e.g., Italian), where systems often default to masculine forms because the source provides no gender cues. To tackle this, we propose ConGA, a novel framework that introduces a fine-grained annotation scheme integrating semantic gender (male/female/ambiguous) with grammatical gender, alongside a cross-sentential entity coreference tracking mechanism. Applying this scheme, we construct a gold-standard annotated resource on the gENder-IT dataset, revealing systematic issues in current MT systems, including overuse of masculine forms and inconsistent rendering of feminine expressions. Our benchmark offers a linguistically grounded, scalable foundation for evaluating and advancing gender-fair machine translation.
📝 Abstract
Handling gender across languages remains a persistent challenge for Machine Translation (MT) and Large Language Models (LLMs), especially when translating from gender-neutral languages into morphologically gendered ones, such as English to Italian. English largely omits grammatical gender, while Italian requires explicit gender agreement across multiple grammatical categories. This asymmetry often leads MT systems to default to masculine forms, reinforcing bias and reducing translation accuracy. To address this issue, we present the Contextual Gender Annotation (ConGA) framework, a linguistically grounded set of guidelines for word-level gender annotation. The scheme distinguishes semantic gender in English, marked with three tags (Masculine (M), Feminine (F), and Ambiguous (A)), from grammatical gender realisation in Italian (Masculine (M), Feminine (F)), and combines both with entity-level identifiers for cross-sentence tracking. We apply ConGA to the gENder-IT dataset, creating a gold-standard resource for evaluating gender bias in translation. Our results reveal systematic masculine overuse and inconsistent feminine realisation, highlighting persistent limitations of current MT systems. By combining fine-grained linguistic annotation with quantitative evaluation, this work offers both a methodology and a benchmark for building more gender-aware multilingual NLP systems.
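To make the annotation scheme concrete, here is a minimal sketch of what a ConGA-style word-level annotation might look like in code. The data structures, field names, and example tokens are illustrative assumptions, not the paper's actual annotation format; only the tag inventories (M/F/A for English semantic gender, M/F for Italian grammatical gender) and the idea of entity-level identifiers come from the abstract.

```python
# Illustrative sketch of ConGA-style word-level annotation (hypothetical
# structures; the paper's actual format may differ).
from dataclasses import dataclass

# English source tokens carry semantic gender tags: M, F, or A (Ambiguous).
SEMANTIC_TAGS = {"M", "F", "A"}
# Italian target tokens carry grammatical gender tags: M or F only.
GRAMMATICAL_TAGS = {"M", "F"}

@dataclass
class Annotation:
    token: str       # the annotated word
    tag: str         # gender tag drawn from the relevant tagset
    entity_id: int   # entity-level identifier for cross-sentence tracking

def annotate(token: str, tag: str, entity_id: int, tagset: set) -> Annotation:
    """Validate the tag against the given inventory and build an annotation."""
    if tag not in tagset:
        raise ValueError(f"invalid tag {tag!r} for tagset {sorted(tagset)}")
    return Annotation(token, tag, entity_id)

# English "researcher" has no gender cue, so it is tagged Ambiguous.
src = annotate("researcher", "A", entity_id=1, tagset=SEMANTIC_TAGS)
# The Italian translation must commit to a grammatical gender,
# e.g. masculine "ricercatore" vs feminine "ricercatrice".
tgt = annotate("ricercatore", "M", entity_id=1, tagset=GRAMMATICAL_TAGS)

# A shared entity_id links the source mention and its translation,
# which is what enables cross-sentential coreference tracking.
assert src.entity_id == tgt.entity_id
```

The key design point the sketch mirrors is the asymmetric tagsets: an Ambiguous source tag paired with a necessarily gendered target tag is exactly the configuration where an MT system must guess, and where the reported masculine default shows up.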