Beyond the Rubric: Cultural Misalignment in LLM Benchmarks for Sexual and Reproductive Health

πŸ“… 2025-11-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study identifies a significant Western cultural bias in current LLM evaluations for sexual and reproductive health (SRH): mainstream benchmarks (e.g., HealthBench) are grounded in Euro-American norms and thus poorly reflect model performance in non-Western contexts such as India. To address this, we propose the first localized evaluation framework for SRH that jointly assesses medical accuracy and cultural appropriateness, combining automated scoring (adapted from HealthBench) with qualitative, expert-led public health review. On 330 single-turn Indian SRH dialogues, the automated grader consistently assigned low scores; expert review, however, found that most responses were both culturally adapted and clinically sound, exposing the cultural misalignment inherent in existing benchmarks. Our contribution lies in systematically diagnosing this evaluation bias and establishing a methodological paradigm and practical pathway for health AI assessment in Global South contexts.

πŸ“ Abstract
Large Language Models (LLMs) have been positioned as having the potential to expand access to health information in the Global South, yet their evaluation remains heavily dependent on benchmarks designed around Western norms. We present insights from a preliminary benchmarking exercise with a chatbot for sexual and reproductive health (SRH) serving an underserved community in India. We evaluated the chatbot using HealthBench, a benchmark for conversational health models from OpenAI. We extracted 637 SRH queries from the dataset and evaluated the 330 single-turn conversations among them. Responses were scored with HealthBench's rubric-based automated grader, which rated them consistently low. However, qualitative analysis by trained annotators and public health experts revealed that many responses were in fact culturally appropriate and medically accurate. We highlight recurring issues, particularly a Western bias in legal framing and norms (e.g., breastfeeding in public), dietary assumptions (e.g., whether fish is safe to eat during pregnancy), and cost structures (e.g., insurance models). Our findings demonstrate the limitations of current benchmarks in capturing the effectiveness of systems built for different cultural and healthcare contexts. We argue for the development of culturally adaptive evaluation frameworks that meet quality standards while recognizing the needs of diverse populations.
Problem

Research questions and friction points this paper is trying to address.

- Current LLM benchmarks exhibit Western cultural bias in health evaluations
- Automated grading fails to recognize culturally appropriate medical responses
- Need culturally adaptive frameworks for Global South healthcare contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Culturally adaptive evaluation frameworks for diverse populations
- Qualitative analysis by public health experts to assess appropriateness
- Identifying Western bias in automated rubric-based grading
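The core tension the paper describes can be sketched in a few lines: rubric-based automated grading sums points for criteria a response satisfies, so a rubric whose criteria encode Western norms will penalize a locally appropriate answer. The sketch below is a hypothetical illustration, not HealthBench's actual implementation; the criteria, weights, and `rubric_score` helper are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str  # what the grader checks for
    points: float     # weight of the criterion
    met: bool         # judgment (in practice made by an LLM grader)

def rubric_score(criteria):
    """Sum points over met criteria, normalized by the maximum achievable."""
    earned = sum(c.points for c in criteria if c.met)
    max_points = sum(c.points for c in criteria if c.points > 0)
    return earned / max_points if max_points else 0.0

# Western-normed rubric (illustrative): dings a response for not
# mentioning insurance, even where out-of-pocket public care is the norm.
western = [
    Criterion("Cites evidence-based medical guidance", 5, True),
    Criterion("Advises checking insurance coverage", 3, False),
    Criterion("Recommends consulting a clinician", 2, True),
]

# Locale-adapted rubric: swaps the insurance criterion for one relevant
# to the Indian public-health context; the same response now scores fully.
localized = [
    Criterion("Cites evidence-based medical guidance", 5, True),
    Criterion("Points to free/low-cost public health services", 3, True),
    Criterion("Recommends consulting a clinician", 2, True),
]

print(rubric_score(western))    # 0.7 — same response, lower score
print(rubric_score(localized))  # 1.0
```

The point of the sketch is that the response is held constant; only the rubric changes, which is the kind of evaluation bias the paper argues the automated grader introduces.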
πŸ”Ž Similar Papers
2022-12-20North American Chapter of the Association for Computational LinguisticsCitations: 19
Authors

Sumon Kanti Dey (Emory University)
S. Manvi (Emory University, Atlanta, Georgia, USA)
Zeel Mehta (Myna Mahila Foundation, Mumbai, India)
Meet Shah (Myna Mahila Foundation, Mumbai, India)
Unnati Agrawal (Emory University, Atlanta, Georgia, USA)
Suhani Jalota (Hoover Institution, Stanford University, Stanford, California, USA)
Azra Ismail (Emory University, Atlanta, Georgia, USA)