🤖 AI Summary
Arabic dialect identification faces the dual challenges of data scarcity and dialectal diversity in low-resource settings. This paper proposes a hybrid modeling paradigm that integrates traditional signal processing with deep learning, and presents the first systematic comparison of two feature–model combinations: MFCC-CNN and DWT-RNN. Experimental results demonstrate that MFCC-derived spectral features paired with CNN-based modeling substantially outperform the DWT-RNN combination (91.2% vs. 66.5% accuracy), establishing a strong baseline for low-resource Arabic dialect identification. Evaluated on the Common Voice Arabic subset, the MFCC-CNN model achieves high accuracy alongside a high F1-score, empirically validating the effectiveness of spectral modeling. These findings provide a clear foundation for future work incorporating self-supervised learning and Transformer-based architectures.
📝 Abstract
Arabic dialect recognition presents a significant challenge in speech technology due to the linguistic diversity of Arabic and the scarcity of large annotated datasets, particularly for underrepresented dialects. This research investigates hybrid modeling strategies that integrate classical signal processing techniques with deep learning architectures to address this problem in low-resource scenarios. Two hybrid models were developed and evaluated: (1) Mel-Frequency Cepstral Coefficients (MFCC) combined with a Convolutional Neural Network (CNN), and (2) Discrete Wavelet Transform (DWT) features combined with a Recurrent Neural Network (RNN). The models were trained on a dialect-filtered subset of the Common Voice Arabic dataset, with dialect labels assigned based on speaker metadata. Experimental results demonstrate that the MFCC + CNN architecture achieved superior performance, with an accuracy of 91.2% and strong precision, recall, and F1-scores, significantly outperforming the DWT + RNN configuration, which achieved an accuracy of 66.5%. These findings highlight the effectiveness of pairing spectral features with convolutional models for Arabic dialect recognition, especially when working with limited labeled data. The study also identifies limitations related to dataset size, potential regional overlaps in labeling, and model optimization, providing a roadmap for future research. Recommendations for further improvement include the adoption of larger annotated corpora, integration of self-supervised learning techniques, and exploration of advanced neural architectures such as Transformers. Overall, this research establishes a strong baseline for future developments in Arabic dialect recognition within resource-constrained environments.
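The two feature front-ends compared above can be illustrated with a minimal NumPy sketch: a textbook MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT) and a one-level Haar DWT. This is an illustrative sketch only, not the paper's implementation; the frame sizes, filter counts, and the synthetic test signal are assumptions chosen for demonstration.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    # Split a 1-D signal into overlapping Hamming-windowed frames
    # (25 ms frames with a 10 ms hop at 16 kHz -- assumed values).
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx] * np.hamming(frame_len)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    # Triangular filters spaced evenly on the mel scale.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        if c > lo: fb[i - 1, lo:c] = (np.arange(lo, c) - lo) / (c - lo)
        if hi > c: fb[i - 1, c:hi] = (hi - np.arange(c, hi)) / (hi - c)
    return fb

def mfcc(x, sr=16000, n_fft=512, n_filters=26, n_ceps=13):
    # Frames -> power spectrum -> log mel energies -> DCT-II -> cepstra.
    frames = frame_signal(x)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = log_mel.shape[1]
    basis = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                   * np.arange(n_ceps)[:, None])
    return log_mel @ basis.T  # (n_frames, n_ceps) matrix fed to the CNN

def haar_dwt(x):
    # One-level Haar DWT: approximation and detail coefficients,
    # the kind of wavelet features an RNN could consume sequentially.
    x = x[: len(x) - len(x) % 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# One second of synthetic 16 kHz audio stands in for a Common Voice clip.
sig = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
feats = mfcc(sig)
approx, detail = haar_dwt(sig)
print(feats.shape)   # one 13-coefficient vector per frame
print(approx.shape, detail.shape)
```

In the paper's pipeline these feature matrices would then be batched and passed to the respective CNN or RNN classifier; that training code is outside the scope of this sketch.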