Afrispeech-Dialog: A Benchmark Dataset for Spontaneous English Conversations in Healthcare and Beyond

📅 2025-02-06
🤖 AI Summary
Automatic speech recognition (ASR) for African-accented English exhibits severe performance degradation (a WER increase exceeding 10%) in critical domains such as healthcare, adversely impacting downstream clinical summarization quality. Method: We introduce the first African-accented spontaneous dialogue benchmark, comprising 50 extended simulated medical and everyday conversations, enabling joint evaluation of ASR, speaker diarization, and clinical summarization. Contribution/Results: We systematically expose robustness deficiencies of state-of-the-art ASR models on African accents, establish a quantitative framework linking ASR errors to summarization fidelity, and demonstrate that 37% of clinically critical information is lost to transcription errors. This work provides foundational data, an inclusive evaluation paradigm, and empirical evidence to advance low-resource accent modeling and equitable conversational AI in the Global South.

📝 Abstract
Speech technologies are transforming interactions across various sectors, from healthcare to call centers and robotics, yet their performance on African-accented conversations remains underexplored. We introduce Afrispeech-Dialog, a benchmark dataset of 50 simulated medical and non-medical African-accented English conversations, designed to evaluate automatic speech recognition (ASR) and related technologies. We assess state-of-the-art (SOTA) speaker diarization and ASR systems on long-form, accented speech, comparing their performance with that on native accents, and find a performance degradation of more than 10%. Additionally, we explore the medical conversation summarization capabilities of large language models (LLMs) to demonstrate the impact of ASR errors on downstream medical summaries, providing insights into the challenges and opportunities for speech technologies in the Global South. Our work highlights the need for more inclusive datasets to advance conversational AI in low-resource settings.
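The degradation reported above is measured in word error rate (WER). As a rough illustration of the metric (a minimal stdlib-only sketch with made-up transcripts, not the paper's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical reference transcript vs. an ASR hypothesis with two word errors.
reference = "the patient reports chest pain and shortness of breath"
hypothesis = "the patient report chest pain and shortness of bread"
print(f"WER: {wer(reference, hypothesis):.1%}")  # 2 substitutions / 9 words
```

Note how a single substituted word ("breath" → "bread") can flip clinical meaning while moving WER only slightly, which is why the paper also traces ASR errors through to summarization fidelity.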
Problem

Research questions and friction points this paper is trying to address.

Lack of benchmark datasets for spontaneous African-accented English conversations
Unmeasured ASR and diarization performance on long-form accented medical dialogue
Unquantified impact of ASR errors on downstream medical summaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Afrispeech-Dialog: 50 simulated medical and non-medical African-accented conversations
Evaluation of SOTA speaker diarization and ASR systems on long-form accented speech
Analysis of how ASR errors degrade LLM-generated medical conversation summaries
👥 Authors
Mardhiyah Sanni
Research Assistant, University of Edinburgh
Tassallah Abdullahi
Brown University (Natural Language Processing, Information Retrieval, Digital Health)
D. Kayande
Intron, BioRAMP, Indian Institute of Information Technology Allahabad
Emmanuel Ayodele
Clinical Data Quality Manager (Health Informatics, Health Data Science, AI/Machine Learning in Healthcare, Digital Health)
Naome A. Etori
Department of Computer Science and Engineering, University of Minnesota-Twin Cities (AI, NLP, Healthcare, HCI, Computational Social Science)
Michael S. Mollel
BioRAMP, University of Glasgow
Moshood Yekini
BioRAMP
Chibuzor Okocha
BioRAMP, University of Florida
L. Ismaila
BioRAMP, Johns Hopkins University
Folafunmi Omofoye
BioRAMP, University of North Carolina at Chapel Hill
B. A. Adewale
BioRAMP
Tobi Olatunji
Research Scientist, Amazon Web Services (Clinical Natural Language Processing)