🤖 AI Summary
To address the fairness gap in automatic speech recognition (ASR), where systems for low-resource languages significantly underperform those for high-resource ones, this paper proposes Latent Mixup—a novel data augmentation method based on latent-space interpolation. Latent Mixup performs cross-lingual and cross-accent feature mixing within intermediate hidden layers of self-supervised pretrained speech models (e.g., wav2vec 2.0), generating semantically coherent and diverse synthetic speech representations. By operating in the latent space rather than on raw waveforms or Mel-spectrograms, it avoids audio distortion while enabling end-to-end optimization of ASR performance. Experiments across 12 low-resource languages—including indigenous languages from Africa and South America—demonstrate that Latent Mixup reduces word error rate (WER) by an average of 18.3%, outperforming conventional time-domain, frequency-domain, and label-shuffling augmentation techniques. The approach offers a scalable, robust, and equitable solution for under-resourced language ASR.
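The summary does not reproduce the paper's implementation details, but the core idea follows standard mixup applied to hidden states: sample an interpolation weight from a Beta distribution and blend two latent sequences and their label targets. The sketch below is a minimal, hypothetical illustration of that operation; the function name, the assumption that the two feature sequences are pre-aligned to the same length, and the `alpha` default are all illustrative, not taken from the paper.

```python
import numpy as np

def latent_mixup(h_a, h_b, y_a, y_b, alpha=0.4, rng=None):
    """Mixup in latent space (illustrative sketch, not the paper's exact method).

    h_a, h_b: (T, D) arrays of intermediate-layer features, e.g. hidden
              states from a pretrained encoder such as wav2vec 2.0;
              assumed here to be pre-aligned to the same length T.
    y_a, y_b: label targets as distributions (e.g. one-hot vectors).
    alpha:    Beta concentration controlling how strongly samples mix.
    """
    rng = rng or np.random.default_rng()
    # Interpolation weight lam in [0, 1]; values of alpha < 1 bias lam
    # toward 0 or 1, keeping most mixed samples close to a real one.
    lam = rng.beta(alpha, alpha)
    # Convex combination of the two latent sequences and their labels.
    h_mix = lam * h_a + (1.0 - lam) * h_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return h_mix, y_mix, lam
```

In a training loop, `h_mix` would replace the encoder output fed to the downstream ASR head, with the loss computed against the soft target `y_mix`; because the blend happens after the encoder, no waveform-level artifacts are introduced.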
📝 Abstract
Modern machine learning models for audio tasks often exhibit superior performance on English and other well-resourced languages, primarily due to the abundance of available training data. This disparity leads to an unfair performance gap for low-resource languages, where data collection is both challenging and costly. In this work, we introduce a novel data augmentation technique for speech corpora designed to mitigate this gap. Through comprehensive experiments, we demonstrate that our method significantly improves the performance of automatic speech recognition systems on low-resource languages. Furthermore, we show that our approach outperforms existing augmentation strategies, offering a practical solution for enhancing speech technology in underrepresented linguistic communities.