Whisper Turns Stronger: Augmenting Wav2Vec 2.0 for Superior ASR in Low-Resource Languages

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address dialectal diversity, accent variability, and scarce labeled data in automatic speech recognition (ASR) for low-resource languages such as Arabic, Russian, and Portuguese, this paper proposes an end-to-end enhancement framework built on Wav2Vec 2.0. The framework strengthens fine-tuning with data augmentation in the time and frequency domains and remains robust to different diacritics. Evaluated on the Common Voice subsets for Arabic, Russian, and Portuguese, it achieves average relative reductions of 33.9% in word error rate (WER) and 53.2% in character error rate (CER) compared with both the pre-trained Wav2Vec 2.0 baseline and Whisper, demonstrating improved generalization and transcription robustness for low-resource, multi-dialectal ASR.
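The paper does not spell out its exact augmentation recipe in this summary; as an illustration only, the sketch below shows the kind of time- and frequency-domain speech augmentation described, using torchaudio. The SpecAugment-style masking and all parameter values are assumptions, not the authors' confirmed pipeline.

```python
# Minimal sketch (not the authors' code) of time- and frequency-domain
# speech augmentation of the sort the summary describes.
import torch
import torchaudio

def augment_waveform(waveform: torch.Tensor, noise_std: float = 0.005) -> torch.Tensor:
    """Time-domain augmentation: additive Gaussian noise (hypothetical setting)."""
    return waveform + noise_std * torch.randn_like(waveform)

def augment_spectrogram(waveform: torch.Tensor, sample_rate: int = 16_000) -> torch.Tensor:
    """Frequency-domain augmentation: mel spectrogram with SpecAugment-style
    frequency and time masking (mask sizes are illustrative)."""
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=80)(waveform)
    mel = torchaudio.transforms.FrequencyMasking(freq_mask_param=27)(mel)
    mel = torchaudio.transforms.TimeMasking(time_mask_param=100)(mel)
    return mel
```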

📝 Abstract
Speech-to-Text and Automatic Speech Recognition (ASR) in low-resource languages are notoriously challenging due to the scarcity of validated datasets and the diversity of dialects. Arabic, Russian, and Portuguese exemplify these difficulties: their many dialects, spoken across several continents, make them effectively low-resource, and the resulting variety of accents and pronunciations further complicates ASR. With the rise of Deep Learning and Transformers, acoustic models such as the renowned Wav2Vec2 have achieved superior performance in speech recognition compared to earlier state-of-the-art approaches. However, although Wav2Vec2 requires significantly less labeled data than traditional methods, its performance still declines markedly for under-represented languages. This paper introduces an end-to-end framework that enhances ASR systems fine-tuned on Wav2Vec2 through data augmentation techniques. To validate the framework's effectiveness, we conducted a detailed experimental evaluation on three datasets from Mozilla's Common Voice project in Arabic, Russian, and Portuguese. The framework also demonstrates robustness to different diacritics. Ultimately, our approach outperforms two baseline models, the pre-trained Wav2Vec2 and the well-known Whisper ASR model, achieving an average relative improvement of 33.9% in Word Error Rate and 53.2% in Character Error Rate.
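The reported 33.9% and 53.2% figures are relative improvements over the baselines' WER and CER. As a minimal sketch, assuming the jiwer library for the metrics and toy transcripts in place of real Common Voice outputs, this is how such relative reductions can be computed:

```python
# Minimal sketch of computing relative WER/CER improvements with jiwer.
# Transcripts here are placeholders, not results from the paper.
import jiwer

def relative_improvement(baseline: float, improved: float) -> float:
    """Relative reduction of an error rate, in percent."""
    return 100.0 * (baseline - improved) / baseline

references = ["example reference transcript"]
baseline_hyps = ["example baseline transcript"]
improved_hyps = ["example reference transcript"]

baseline_wer = jiwer.wer(references, baseline_hyps)
improved_wer = jiwer.wer(references, improved_hyps)
baseline_cer = jiwer.cer(references, baseline_hyps)
improved_cer = jiwer.cer(references, improved_hyps)

print(f"WER relative improvement: {relative_improvement(baseline_wer, improved_wer):.1f}%")
print(f"CER relative improvement: {relative_improvement(baseline_cer, improved_cer):.1f}%")
```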
Problem

Research questions and friction points this paper is trying to address.

Speech Recognition
Low-Resource Languages
Dialect Variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced Wav2Vec 2.0
Speech Recognition Improvement
Diverse Dialects Support
O. H. Anidjar
School of Computer Science, College of Management, Rishon Le’Tzion, Israel; Department of Computer and Software Engineering, Ariel University, Ariel, Israel
Revital Marbel
Holon Institute of Technology (HIT)
Computer Science
Roi Yozevitch
Computer & Software Engineering, Electrical & Electronic Engineering, Ariel University
Applied ML, Bayesian Filters, ASR, NLP