🤖 AI Summary
Multilingual self-supervised speech recognition (SSL-ASR) models (e.g., XLS-R) exhibit implicit language bias, where fine-tuning performance depends more on pretraining data volume from high-resource languages than on linguistic priors.
Method: We apply the Lottery Ticket Hypothesis (LTH) to SSL-ASR for the first time, proposing a framework that identifies language-specific subnetworks by combining multilingual zero-shot transfer evaluation with fine-grained weight attribution analysis.
Contribution/Results: Our empirical analysis reveals that during fine-tuning, XLS-R predominantly reuses weights shaped by high-resource languages, significantly degrading low-resource language subnetworks. This work provides the first systematic demonstration of a data-scale-driven language bias mechanism in SSL-ASR, uncovering previously overlooked harms of data imbalance, and offers theoretical insights and interpretable diagnostic tools for fair, robust multilingual speech modeling.
📝 Abstract
Self-supervised learning (SSL) is used in deep learning to train on large datasets without the need for expensive data labelling. Recently, large Automatic Speech Recognition (ASR) models such as XLS-R have utilised SSL to train on over one hundred different languages simultaneously. However, closer inspection shows that the bulk of the pretraining data for XLS-R comes from a small number of languages. Biases learned through SSL have been shown to exist in multiple domains, but language bias in multilingual SSL ASR has not been thoroughly examined. In this paper, we utilise the Lottery Ticket Hypothesis (LTH) to identify language-specific subnetworks within XLS-R and test the performance of these subnetworks on a variety of different languages. We show that when fine-tuning, XLS-R bypasses traditional linguistic knowledge and builds only on weights learned from the languages that contribute the most data to pretraining.
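The subnetwork-identification idea above can be sketched with the standard LTH recipe: prune a fine-tuned model by weight magnitude, then rewind the surviving weights to their pretrained values. The snippet below is a minimal toy illustration (not the paper's actual pipeline): `magnitude_mask`, `lottery_ticket`, the matrix sizes, and the two simulated "languages" are all hypothetical, and the overlap score merely illustrates how one could compare language-specific masks.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask keeping the largest-magnitude weights; prunes `sparsity` fraction."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)              # number of weights to prune
    threshold = np.sort(flat)[k] if k < flat.size else np.inf
    return (np.abs(weights) >= threshold).astype(np.float32)

def lottery_ticket(init_weights, finetuned_weights, sparsity):
    """One LTH round: mask from fine-tuned magnitudes, rewind survivors to init."""
    mask = magnitude_mask(finetuned_weights, sparsity)
    return mask, init_weights * mask

# Toy example: two "languages" fine-tune the same initialization differently.
rng = np.random.default_rng(0)
w_init = rng.normal(size=(8, 8))
w_lang_a = w_init + 0.5 * rng.normal(size=(8, 8))
w_lang_b = w_init + 0.5 * rng.normal(size=(8, 8))

mask_a, ticket_a = lottery_ticket(w_init, w_lang_a, sparsity=0.8)
mask_b, ticket_b = lottery_ticket(w_init, w_lang_b, sparsity=0.8)

# Fraction of language A's subnetwork also kept in language B's subnetwork.
overlap = (mask_a * mask_b).sum() / mask_a.sum()
print(f"subnetwork overlap: {overlap:.2f}")
```

A high overlap between masks derived from different fine-tuning languages would be consistent with the paper's finding that XLS-R reuses the same (high-resource-dominated) weights regardless of the target language.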