Different Speech Translation Models Encode and Translate Speaker Gender Differently

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how speech translation (ST) models implicitly encode speaker gender and how differences in that encoding translate into gender assignment bias. To address this question, the authors combine probe-based interpretability analysis, comparative experiments across multilingual end-to-end and adapter-based ST models (EN→FR/IT/ES), and quantitative evaluation of gender feature decodability and bias magnitude. The results show that conventional encoder-decoder architectures retain substantial speaker gender information from the speech input, whereas newer adapter-based architectures exhibit near-zero gender encoding and, in turn, a systematic "male-default" translation bias that is especially pronounced in Romance languages with grammatical gender marking. The work establishes an analytical framework linking gender interpretability to translation fairness, and demonstrates that lightweight architectural choices such as adapter integration may compromise gender fairness, providing both conceptual insight and empirical evidence for developing interpretable and equitable ST systems.

📝 Abstract
Recent studies on interpreting the hidden states of speech models have shown their ability to capture speaker-specific features, including gender. Does this finding also hold for speech translation (ST) models? If so, what are the implications for the speaker's gender assignment in translation? We address these questions from an interpretability perspective, using probing methods to assess gender encoding across diverse ST models. Results on three language directions (English-French/Italian/Spanish) indicate that while traditional encoder-decoder models capture gender information, newer architectures (integrating a speech encoder with a machine translation system via adapters) do not. We also demonstrate that low gender encoding capabilities result in systems' tendency toward a masculine default, a translation bias that is more pronounced in newer architectures.
Problem

Research questions and friction points this paper is trying to address.

Assess gender encoding in speech translation models
Compare traditional and newer ST model architectures
Analyze masculine default bias in gender translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing methods assess gender encoding
Speech encoder integrates with MT system
Newer architectures show masculine default bias
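The probing approach above can be illustrated with a minimal sketch: train a simple linear classifier (here, logistic regression implemented from scratch) to predict speaker gender from pooled encoder hidden states; test accuracy well above chance indicates that the representation encodes gender. All data below is synthetic — the feature dimensions, the injected signal, and the `make_states` helper are assumptions for illustration, not the paper's actual ST model features or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # hypothetical dimensionality of pooled encoder states

def make_states(n, shift):
    """Synthetic stand-in for mean-pooled encoder hidden states.

    A class-dependent mean shift in a few dimensions simulates a
    representation that encodes speaker gender; shift=0 would simulate
    a representation with no gender information (near-chance probe).
    """
    labels = rng.integers(0, 2, size=n)   # binary speaker-gender labels
    states = rng.normal(size=(n, dim))
    states[labels == 1, :8] += shift      # inject signal in 8 dimensions
    return states, labels

X_tr, y_tr = make_states(600, shift=1.0)
X_te, y_te = make_states(200, shift=1.0)

# Logistic-regression probe trained by batch gradient descent.
w, b, lr = np.zeros(dim), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))   # predicted probabilities
    w -= lr * (X_tr.T @ (p - y_tr)) / len(y_tr)  # gradient of log-loss
    b -= lr * float(np.mean(p - y_tr))

# Held-out accuracy: substantially above 0.5 means the (synthetic)
# states are linearly decodable for gender, i.e. the probe "finds" it.
acc = float(np.mean(((X_te @ w + b) > 0) == y_te))
print(round(acc, 2))
```

Under this setup, the probe's held-out accuracy is the quantity of interest: comparing it across architectures (as the paper does across encoder-decoder and adapter-based ST models) reveals which representations retain speaker gender information.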