π€ AI Summary
This study identifies a significant Western cultural bias in current LLM evaluations for sexual and reproductive health (SRH): mainstream benchmarks (e.g., HealthBench) are grounded in Euro-American norms and thus inadequately reflect model performance in non-Western contexts such as India. To address this, we propose the first localized evaluation framework for SRH that jointly ensures medical accuracy and cultural appropriateness, integrating automated scoring (adapted from HealthBench) with qualitative, expert-led public health assessment. Evaluated on 330 single-turn Indian SRH dialogues, automated metrics consistently assigned low scores; however, expert review confirmed that most responses were both culturally adapted and clinically soundβthereby exposing the cultural misalignment inherent in existing benchmarks. Our contribution lies in systematically diagnosing this evaluation bias and establishing a methodological paradigm and practical pathway for health AI assessment in Global South contexts.
π Abstract
Large Language Models (LLMs) have been positioned as having the potential to expand access to health information in the Global South, yet their evaluation remains heavily dependent on benchmarks designed around Western norms. We present insights from a preliminary benchmarking exercise with a chatbot for sexual and reproductive health (SRH) for an underserved community in India. We evaluated using HealthBench, a benchmark for conversational health models by OpenAI. We extracted 637 SRH queries from the dataset and evaluated on the 330 single-turn conversations. Responses were evaluated using HealthBench's rubric-based automated grader, which rated responses consistently low. However, qualitative analysis by trained annotators and public health experts revealed that many responses were actually culturally appropriate and medically accurate. We highlight recurring issues, particularly a Western bias, such as for legal framing and norms (e.g., breastfeeding in public), diet assumptions (e.g., fish safe to eat during pregnancy), and costs (e.g., insurance models). Our findings demonstrate the limitations of current benchmarks in capturing the effectiveness of systems built for different cultural and healthcare contexts. We argue for the development of culturally adaptive evaluation frameworks that meet quality standards while recognizing needs of diverse populations.