🤖 AI Summary
Automatic speech recognition (ASR) for African-accented English exhibits severe performance degradation—exceeding a 10% WER increase—in critical domains such as healthcare, adversely impacting downstream clinical summarization quality. Method: We introduce the first African-accented spontaneous dialogue benchmark, comprising 50 extended simulated medical and everyday conversations, enabling joint evaluation of ASR, speaker diarization, and clinical summarization. Contribution/Results: We systematically expose robustness deficiencies of state-of-the-art ASR models on African accents; establish a quantitative framework linking ASR errors to summarization fidelity; and demonstrate that 37% of clinically critical information is lost due to transcription errors. This work provides foundational data, an inclusive evaluation paradigm, and empirical evidence to advance low-resource accent modeling and equitable conversational AI in the Global South.
📝 Abstract
Speech technologies are transforming interactions across various sectors, from healthcare to call centers and robots, yet their performance on African-accented conversations remains underexplored. We introduce Afrispeech-Dialog, a benchmark dataset of 50 simulated medical and non-medical African-accented English conversations, designed to evaluate automatic speech recognition (ASR) and related technologies. We assess state-of-the-art (SOTA) speaker diarization and ASR systems on long-form, accented speech, comparing their performance with native accents and discover a 10%+ performance degradation. Additionally, we explore medical conversation summarization capabilities of large language models (LLMs) to demonstrate the impact of ASR errors on downstream medical summaries, providing insights into the challenges and opportunities for speech technologies in the Global South. Our work highlights the need for more inclusive datasets to advance conversational AI in low-resource settings.