Whispering in Amharic: Fine-tuning Whisper for Low-resource Language

📅 2025-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the suboptimal automatic speech recognition (ASR) performance of the Whisper model on low-resource Amharic. To tackle this, we propose a fine-tuning approach integrating multi-source speech data with homophone normalization. Methodologically, we perform supervised fine-tuning on the Whisper-small architecture using the first joint combination of FLEURS, Mozilla Common Voice, and BDU-speech Amharic datasets; additionally, we introduce language-specific homophone normalization as a text post-processing step to mitigate morphological ambiguity. Experimental results show that our fine-tuned model, Whisper-small-am, achieves a substantial reduction in word error rate (WER) and a concurrent improvement in BLEU score on the Amharic test set. These gains validate the effectiveness of cross-dataset mixed fine-tuning and linguistically informed text normalization for low-resource ASR. The proposed framework offers a reproducible, lightweight, and effective optimization paradigm for low-resource speech recognition.
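The paper's exact normalization table is not reproduced here, but the idea behind Amharic homophone normalization can be sketched: several Ethiopic letter series are pronounced identically in Amharic (e.g. ሀ/ሐ/ኀ, ሰ/ሠ, አ/ዐ, ጸ/ፀ), so transcripts are collapsed to one canonical series before comparison, preventing spelling variants from being counted as errors. The family list below is an illustrative subset chosen for this sketch, not the authors' mapping:

```python
# Illustrative Amharic homophone families (a subset; the paper's exact
# table may differ). Each entry maps the base codepoint of a source
# letter series to the base codepoint of its canonical series.
FAMILIES = [
    (0x1210, 0x1200),  # ሐ-series → ሀ-series ("h" sound)
    (0x1280, 0x1200),  # ኀ-series → ሀ-series ("h" sound)
    (0x1220, 0x1230),  # ሠ-series → ሰ-series ("s" sound)
    (0x12D0, 0x12A0),  # ዐ-series → አ-series (glottal vowel carrier)
    (0x1340, 0x1338),  # ፀ-series → ጸ-series ("ts'" sound)
]

def build_table() -> dict:
    """Expand each family across its seven vocalic orders (ä,u,i,a,e,ə,o)."""
    table = {}
    for src, dst in FAMILIES:
        for order in range(7):
            table[src + order] = dst + order
    return table

_TABLE = build_table()

def normalize_homophones(text: str) -> str:
    """Map homophonous Ethiopic characters to one canonical form."""
    return text.translate(_TABLE)
```

Applied to both reference and hypothesis transcripts before scoring, this kind of mapping makes WER and BLEU reflect recognition errors rather than orthographic choice.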

📝 Abstract
This work explores fine-tuning OpenAI's Whisper automatic speech recognition (ASR) model for Amharic, a low-resource language, to improve transcription accuracy. While the foundational Whisper model struggles with Amharic due to limited representation in its training data, we fine-tune it using datasets such as Mozilla Common Voice, FLEURS, and the BDU-speech dataset. The best-performing model, Whisper-small-am, improves significantly when fine-tuned on a mix of existing FLEURS data and new, unseen Amharic datasets. Training solely on new data leads to poor performance, but combining it with FLEURS data reinforces the model, enabling better specialization in Amharic. We also demonstrate that normalizing Amharic homophones significantly improves Word Error Rate (WER) and Bilingual Evaluation Understudy (BLEU) scores. This study underscores the importance of fine-tuning strategies and dataset composition for improving ASR in low-resource languages, providing insights for future Amharic speech recognition research.
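For reference, the WER metric cited above is the word-level edit distance between a reference transcript and the model's hypothesis, divided by the reference length. A minimal pure-Python sketch (not the paper's evaluation code, which presumably uses a standard library such as jiwer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] = edit distance between the first i reference words
    # and the first j hypothesis words (single-row DP).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            diag, d[j] = d[j], min(
                d[j] + 1,          # deletion
                d[j - 1] + 1,      # insertion
                diag + (r != h),   # substitution (free if words match)
            )
    return d[-1] / max(len(ref), 1)
```

Because homophone variants count as substitutions under this metric, normalizing them in both strings before calling `wer` directly lowers the reported error, which is why the paper treats normalization as part of evaluation.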
Problem

Research questions and friction points this paper is trying to address.

Poor ASR accuracy of off-the-shelf Whisper on low-resource Amharic
How to compose fine-tuning data so the model specializes without degrading
Amharic homophone spelling variants inflating WER and depressing BLEU
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning Whisper for Amharic ASR
Combining FLEURS with new Amharic datasets (Common Voice, BDU-speech)
Homophone normalization that lowers WER and raises BLEU