Abstract
This paper describes Elyadata & LIA's joint submission to the NADI 2025 multi-dialectal Arabic speech processing shared task. We participated in the Spoken Arabic Dialect Identification (ADI) and multi-dialectal Arabic ASR subtasks. Our submission ranked first in the ADI subtask and second in the multi-dialectal Arabic ASR subtask among all participants. Our ADI system is a fine-tuned Whisper-large-v3 encoder with data augmentation; it obtained the highest ADI accuracy score of 79.83% on the official test set. For multi-dialectal Arabic ASR, we fine-tuned SeamlessM4T-v2 Large (Egyptian variant) separately for each of the eight considered dialects. Overall, we obtained an average WER and CER of 38.54% and 14.53%, respectively, on the test set. Our results demonstrate the effectiveness of large pre-trained speech models with targeted fine-tuning for Arabic speech processing.
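The ASR results above are reported as word error rate (WER) and character error rate (CER). As a reference for how these scores are typically computed, here is a minimal self-contained sketch using a standard Levenshtein edit-distance dynamic program (this is the conventional definition of the metrics, not the authors' evaluation code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (lists of words or characters)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j]: distance from ref[:i] to hyp[:j], rolled row
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                           # deletion from ref
                dp[j - 1] + 1,                       # insertion into ref
                prev + (ref[i - 1] != hyp[j - 1]),   # substitution (0 if equal)
            )
            prev = cur
    return dp[n]

def wer(ref, hyp):
    """Word Error Rate: word-level edit distance over reference word count."""
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / max(len(r), 1)

def cer(ref, hyp):
    """Character Error Rate: character-level edit distance over reference length."""
    return edit_distance(list(ref), list(hyp)) / max(len(ref), 1)
```

In practice, corpus-level WER/CER is usually computed by summing edit distances and reference lengths over all utterances before dividing, rather than averaging per-utterance scores.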