AI Summary
This work addresses the challenges of evaluating automatic speech recognition (ASR) for low-resource Urdu. We systematically benchmark three state-of-the-art models (Whisper, MMS, and Seamless-M4T) on both read and conversational speech. To enable rigorous evaluation, we introduce the first dedicated Urdu conversational speech benchmark. Performance is assessed using word error rate (WER) and fine-grained error analysis (substitutions, insertions, deletions), exposing limitations of standard metrics in handling dialectal vocabulary, coarticulation, and non-normalized text. Key contributions are: (1) empirical validation that Seamless-large excels on read speech while Whisper-large achieves superior performance on conversational speech; (2) evidence that a robust Urdu text normalization pipeline is critical for enhancing ASR evaluation; and (3) methodological insights and open data infrastructure to advance principled ASR evaluation for low-resource languages.
Abstract
This paper presents a comprehensive evaluation of Urdu Automatic Speech Recognition (ASR) models. We analyze the performance of three ASR model families (Whisper, MMS, and Seamless-M4T) using Word Error Rate (WER), along with a detailed examination of the most frequently misrecognized words and of error types, including insertions, deletions, and substitutions. Our analysis is conducted on two types of datasets: read speech and conversational speech. Notably, we present the first conversational speech dataset designed for benchmarking Urdu ASR models. We find that Seamless-large outperforms the other ASR models on the read speech dataset, while Whisper-large performs best on the conversational speech dataset. Furthermore, this evaluation highlights the complexities of assessing ASR models for low-resource languages like Urdu using quantitative metrics alone, and it emphasizes the need for a robust Urdu text normalization system. Our findings contribute valuable insights for developing robust ASR systems for low-resource languages like Urdu.
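The error-type breakdown used in the evaluation can be illustrated with a minimal sketch (not the paper's actual evaluation code): a standard Levenshtein alignment between reference and hypothesis word sequences, counting substitutions, deletions, and insertions, from which WER is the total edit cost divided by the reference length.

```python
# Hedged sketch of WER with an error-type breakdown via Levenshtein
# dynamic programming over words. Function name and return format are
# illustrative, not from the paper.

def wer_breakdown(reference: str, hypothesis: str):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = (cost, substitutions, deletions, insertions)
    # for aligning ref[:i] against hyp[:j].
    dp = [[(0, 0, 0, 0)] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        dp[i][0] = (i, 0, i, 0)   # empty hypothesis: all deletions
    for j in range(1, len(hyp) + 1):
        dp[0][j] = (j, 0, 0, j)   # empty reference: all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]   # exact match, no edit
            else:
                c_sub, s1, d1, n1 = dp[i - 1][j - 1]
                c_del, s2, d2, n2 = dp[i - 1][j]
                c_ins, s3, d3, n3 = dp[i][j - 1]
                best = min(c_sub, c_del, c_ins)
                if best == c_sub:
                    dp[i][j] = (c_sub + 1, s1 + 1, d1, n1)
                elif best == c_del:
                    dp[i][j] = (c_del + 1, s2, d2 + 1, n2)
                else:
                    dp[i][j] = (c_ins + 1, s3, d3, n3 + 1)
    cost, subs, dels, ins = dp[len(ref)][len(hyp)]
    return {"wer": cost / max(len(ref), 1),
            "substitutions": subs, "deletions": dels, "insertions": ins}
```

Note that this word-level alignment is exactly why text normalization matters: orthographic variants of the same Urdu word surface as spurious substitutions unless both strings are normalized before alignment.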