🤖 AI Summary
This study investigates how speech translation (ST) models assign grammatical gender to speaker-referring terms when translating from English, a language with notional gender, into Spanish, French, and Italian, where such terms carry grammatical gender. Using a multilingual (en→es/fr/it) framework that combines contrastive feature attribution on spectrograms with probing of internal representations, the authors examine how training-data patterns, internal language model (ILM) biases, and acoustic information interact. Key findings: (1) models do not replicate term-specific gender associations from their training data, but instead learn broader patterns of masculine prevalence; (2) although the ILM exhibits a strong masculine bias, models can override it based on acoustic input; and (3) the model with the highest gender accuracy relies on a previously unknown mechanism, using first-person pronouns to link gendered terms back to the speaker and drawing on gender cues distributed across the frequency spectrum rather than concentrated in pitch.
📝 Abstract
Unlike text, speech conveys information about the speaker, such as gender, through acoustic cues like pitch. This gives rise to modality-specific bias concerns. For example, in speech translation (ST), when translating from languages with notional gender, such as English, into languages where gender-ambiguous terms referring to the speaker are assigned grammatical gender, the speaker's vocal characteristics may play a role in gender assignment. This risks misgendering speakers, whether through masculine defaults or vocal-based assumptions. Yet, how ST models make these decisions remains poorly understood. We investigate the mechanisms ST models use to assign gender to speaker-referring terms across three language pairs (en-es/fr/it), examining how training data patterns, internal language model (ILM) biases, and acoustic information interact. We find that models do not simply replicate term-specific gender associations from training data, but learn broader patterns of masculine prevalence. While the ILM exhibits strong masculine bias, models can override these preferences based on acoustic input. Using contrastive feature attribution on spectrograms, we reveal that the model with higher gender accuracy relies on a previously unknown mechanism: using first-person pronouns to link gendered terms back to the speaker, accessing gender information distributed across the frequency spectrum rather than concentrated in pitch.
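The contrastive feature attribution mentioned above scores each spectrogram cell by how much it pushes the model toward one gendered form over the other. A minimal sketch of the idea, using a toy linear scorer in place of a real ST model (all names and shapes here are illustrative assumptions, not the paper's setup): the attribution map is the gradient of the logit *difference* (masculine minus feminine) multiplied elementwise by the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ST model's gender decision: a linear scorer over a
# log-mel spectrogram (freq_bins x time_frames). The models in the study
# are full encoder-decoders; this only illustrates the attribution math.
n_freq, n_time = 80, 50
spectrogram = rng.standard_normal((n_freq, n_time))
W_masc = rng.standard_normal((n_freq, n_time))  # weights behind the masculine-form logit
W_fem = rng.standard_normal((n_freq, n_time))   # weights behind the feminine-form logit

def logits(x):
    """Return (masculine, feminine) logits for input spectrogram x."""
    return np.array([(W_masc * x).sum(), (W_fem * x).sum()])

# Contrastive attribution: gradient of (masculine logit - feminine logit)
# times the input. For this linear scorer the gradient is just
# W_masc - W_fem; for a neural model it would come from backpropagation.
grad = W_masc - W_fem
attribution = grad * spectrogram  # (freq, time) saliency map

# Summing magnitudes over time shows which frequency bands drive the
# gender choice; a pitch-only model would concentrate mass in the lowest bins.
per_band = np.abs(attribution).sum(axis=1)
```

For a linear scorer, the attribution map sums exactly to the logit difference, which makes the "credit assignment" interpretation explicit; gradient-based attribution for real encoder-decoder models only approximates this property.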