AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset

📅 2024-11-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack rigorous evaluation in African healthcare contexts, raising concerns about their clinical reliability, equity, and contextual validity. Method: We introduce AfriMed-QA, the first pan-African, multi-specialty, academic-grade English medical QA benchmark, comprising 15,000 expert-annotated questions spanning 16 countries, 32 specialties, and 60+ medical schools, coupled with a multidimensional evaluation framework assessing correctness, bias, and explainability. Contribution/Results: Our systematic analysis reveals significant performance gaps, geographic biases, and specialty imbalances in LLMs under African conditions: state-of-the-art models underperform substantially on AfriMed-QA relative to USMLE-style benchmarks; biomedical-specialized models underperform general-purpose LLMs; and lightweight edge models consistently fail to reach a passing score. Surprisingly, human evaluators consistently preferred LLM-generated answers and explanations over those provided by clinicians, a finding that challenges conventional clinical QA assessment paradigms. This work establishes a foundational benchmark and methodology for evaluating AI fairness and clinical utility in resource-constrained settings.

📝 Abstract
Recent advancements in large language model (LLM) performance on medical multiple-choice question (MCQ) benchmarks have stimulated interest from healthcare providers and patients globally. Particularly in low- and middle-income countries (LMICs) facing acute physician shortages and a lack of specialists, LLMs offer a potentially scalable pathway to enhance healthcare access and reduce costs. However, their effectiveness in the Global South, especially across the African continent, remains to be established. In this work, we introduce AfriMed-QA, the first large-scale Pan-African English multi-specialty medical Question-Answering (QA) dataset, comprising 15,000 questions (open and closed-ended) sourced from over 60 medical schools across 16 countries and covering 32 medical specialties. We further evaluate 30 LLMs across multiple axes, including correctness and demographic bias. Our findings show significant performance variation across specialties and geographies, with MCQ performance clearly lagging USMLE (MedQA). We find that biomedical LLMs underperform general models and that smaller, edge-friendly LLMs struggle to achieve a passing score. Interestingly, human evaluations show a consistent consumer preference for LLM answers and explanations when compared with clinician answers.
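The MCQ track of such an evaluation reduces to scoring each model's selected option letter against the reference answer key. The short Python sketch below illustrates that scoring step only; the column schema and the dummy always-"A" model are illustrative assumptions, not the authors' actual evaluation harness.

```python
# Minimal sketch of MCQ "correctness" scoring: compare a model's chosen
# option letter against the reference key. The column names and the
# always-"A" dummy model are illustrative assumptions, not the paper's harness.
import pandas as pd

def dummy_model(question: str, options: str) -> str:
    # Stand-in for a real LLM call; always picks option A.
    return "A"

# Two toy items standing in for AfriMed-QA MCQ rows (hypothetical schema).
items = pd.DataFrame({
    "question": ["Which vitamin deficiency causes scurvy?",
                 "Which organism most commonly causes malaria in West Africa?"],
    "options": ["A) Vitamin C  B) Vitamin D  C) Vitamin K  D) Vitamin B12",
                "A) P. vivax  B) P. falciparum  C) P. ovale  D) P. malariae"],
    "answer": ["A", "B"],
})

predictions = [dummy_model(q, o) for q, o in zip(items["question"], items["options"])]
accuracy = sum(p == a for p, a in zip(predictions, items["answer"])) / len(items)
print(f"MCQ accuracy: {accuracy:.2f}")  # 0.50 for this toy example
```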
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
AfriMed-QA
African Healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

AfriMed-QA Dataset
Medical Knowledge Assessment
Model Performance Comparison
Tobi Olatunji
Research Scientist, Amazon Web Services
Clinical Natural Language Processing
Charles Nimo
Georgia Institute of Technology
Abraham Owodunni
The Ohio State University
Multilingual NLP, Low-resource NLP, Efficient ML
Tassallah Abdullahi
Brown University
Natural Language Processing, Information Retrieval, Digital Health
Emmanuel Ayodele
Clinical Data Quality Manager
Health Informatics, Health Data Science, AI/Machine Learning in Healthcare, Digital Health
Mardhiyah Sanni
Research Assistant, University of Edinburgh
Chinemelu Aka
Intron
Folafunmi Omofoye
BioRAMP
Foutse Yuehgoh
BioRAMP
Timothy Faniran
BioRAMP
Bonaventure F. P. Dossou
Deep Learning and NLP Research at McGill University & Mila Quebec AI Institute
Low-Resource NLP, Language Modeling, Multilingualism, Drug Discovery, Machine Learning for Health
Moshood Yekini
BioRAMP
Jonas Kemp
Google Research
Katherine Heller
Google Research
Machine Learning, Health AI, Ethical AI
Jude Chidubem Omeke
BioRAMP
Chidi Asuzu
BioRAMP
Naome A. Etori
Department of Computer Science and Engineering, University of Minnesota-Twin Cities
AI, NLP, Healthcare, HCI, Computational Social Science
Aimérou Ndiaye
Masakhane
Ifeoma Okoh
Masakhane
E. Ocansey
Masakhane
Wendy Kinara
Kenyatta University
Michael Best
Georgia Institute of Technology
Irfan Essa
Distinguished Professor of Computing, Georgia Tech / Research Scientist, Google
Computer Vision, Artificial Intelligence, Machine Learning, Computer Graphics, Robotics
Stephen Edward Moore
University of Cape Coast
Chris Fourie
SisonkeBiotik
Mercy Nyamewaa Asiedu
Google Research