🤖 AI Summary
This study addresses the substantial heterogeneity in dialectal Arabic speech data, which spans domain coverage, dialect annotation, and recording conditions and impedes cross-dataset comparison and model evaluation. It systematically quantifies the "dialectness" and audio quality of such resources, integrating 31 datasets covering 14 dialects into a unified framework. The work proposes a standardized descriptive methodology that goes beyond coarse-grained labels and establishes a reproducible evaluation platform through computational linguistic analysis, proxy metrics of audio quality, and modern ASR benchmarking. The project not only exposes inconsistencies in acoustic conditions and dialectal signals across existing datasets but also delivers strong baselines and harmonized metadata to support robust automatic speech recognition for dialectal Arabic.
📝 Abstract
Dialectal Arabic (DA) speech data vary widely in domain coverage, dialect labeling practices, and recording conditions, complicating cross-dataset comparison and model evaluation. To characterize this landscape, we conduct a computational analysis of linguistic "dialectness" alongside objective proxies of audio quality on the training splits of widely used DA corpora. We find substantial heterogeneity both in acoustic conditions and in the strength and consistency of dialectal signals across datasets, underscoring the need for standardized characterization beyond coarse labels. To reduce fragmentation and support reproducible evaluation, we introduce Arab Voices, a standardized framework for DA ASR. Arab Voices provides unified access to 31 datasets spanning 14 dialects, with harmonized metadata and evaluation utilities. We further benchmark a range of recent ASR systems, establishing strong baselines for modern DA ASR.