Script Gap: Evaluating LLM Triage on Indian Languages in Native vs Roman Scripts in a Real World Setting

📅 2025-12-11
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study identifies a critical “script gap” in large language models (LLMs) for maternal and child health triage in Indian languages: romanized input, which is common in real-world low-resource settings, reduces F1 scores by 5–12 percentage points relative to native-script input, raising misclassification rates; extrapolated to partner institutions’ annual caseloads, this could cause nearly two million additional erroneous triage decisions.
Method: multiple LLMs (GPT-4, Claude, Llama, etc.) are benchmarked on clinically annotated, user-generated queries in five Indian languages (Hindi, Telugu, etc.) and Nepali, with fine-grained error attribution and semantic intent consistency analysis.
Contribution/Results: the paper provides the first empirical evidence that the degradation stems primarily from output fragility induced by orthographic noise, not from semantic misunderstanding, exposing a fundamental robustness limitation on script-variant inputs. These findings establish crucial empirical grounding for evaluating and deploying LLMs in low-resource, multilingual clinical contexts.
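For readers who want to reproduce the headline measurement, here is a minimal sketch of the native-vs-roman benchmark under stated assumptions: `triage_with_llm` is a hypothetical model wrapper, and the paired `native`/`roman` fields are an assumed data layout, not the paper's released format.

```python
from sklearn.metrics import f1_score

def triage_with_llm(model: str, text: str) -> str:
    """Hypothetical wrapper: send a single query to the named LLM and
    return its predicted triage label (e.g., 'emergency' or 'routine')."""
    raise NotImplementedError  # stand-in for a real API call

def script_gap(model: str, queries: list[dict]) -> float:
    """Macro-F1 difference between native-script and romanized inputs.

    Each entry in `queries` is assumed to carry the same message in
    both orthographies plus a clinician-annotated gold label:
        {"native": ..., "roman": ..., "label": ...}
    """
    gold = [q["label"] for q in queries]
    pred_native = [triage_with_llm(model, q["native"]) for q in queries]
    pred_roman = [triage_with_llm(model, q["roman"]) for q in queries]

    f1_native = f1_score(gold, pred_native, average="macro")
    f1_roman = f1_score(gold, pred_roman, average="macro")
    # The paper reports this gap at 5-12 percentage points; f1_score
    # returns fractions, so convert to points.
    return 100 * (f1_native - f1_roman)
```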

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) are increasingly deployed in high-stakes clinical applications in India. In many such settings, speakers of Indian languages frequently communicate using romanized text rather than native scripts, yet existing research rarely evaluates this orthographic variation using real-world data. We investigate how romanization impacts the reliability of LLMs in a critical domain: maternal and newborn healthcare triage. We benchmark leading LLMs on a real-world dataset of user-generated queries spanning five Indian languages and Nepali. Our results reveal consistent degradation in performance for romanized messages, with F1 scores trailing those of native scripts by 5–12 points. At our partner maternal health organization in India, this gap could cause nearly 2 million excess errors in triage. Crucially, this performance gap between scripts is not due to a failure in clinical reasoning. We demonstrate that LLMs often correctly infer the semantic intent of romanized queries. Nevertheless, their final classification outputs remain brittle in the presence of orthographic noise. Our findings highlight a critical safety blind spot in LLM-based health systems: models that appear to understand romanized input may still fail to act on it reliably.
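For scale, the abstract's "nearly 2 million excess errors" follows from multiplying the script-induced increase in error rate by an annual caseload. A minimal sketch of that arithmetic, with placeholder numbers since the paper's exact caseload and error rates are not given here:

```python
# Illustrative back-of-the-envelope extrapolation. The caseload and
# error rates below are placeholders, NOT figures from the paper;
# only the shape of the calculation is implied by the abstract.
annual_queries = 20_000_000      # assumed annual triage caseload
error_rate_native = 0.15         # assumed error rate, native script
error_rate_roman = 0.25         # assumed error rate, romanized text

excess_errors = annual_queries * (error_rate_roman - error_rate_native)
print(f"Excess erroneous triage decisions per year: {excess_errors:,.0f}")
# With these placeholder values: 2,000,000, i.e., the order of
# magnitude the paper reports.
```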
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLM performance degradation on romanized Indian-language text
Investigates how script choice affects the reliability of maternal healthcare triage
Highlights safety risks that orthographic noise poses to LLM-based health systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking LLMs on real-world romanized Indian-language queries
Identifying consistent performance degradation in romanized versus native-script triage
Attributing errors to output fragility under orthographic noise rather than failed semantic understanding (sketched below)
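The error-attribution idea in the last bullet can be probed with a two-pass check: ask the model to restate a query's intent, then ask it for a triage label, and count cases where the intent is right but the label is wrong. A minimal sketch follows; the function names, data fields, and intent-matching hook are all assumptions for illustration, not the paper's actual protocol.

```python
from collections.abc import Callable

def restate_intent(model: str, text: str) -> str:
    """Hypothetical call: ask the LLM to paraphrase what the user needs."""
    raise NotImplementedError

def triage_label(model: str, text: str) -> str:
    """Hypothetical call: ask the LLM for a final triage category."""
    raise NotImplementedError

def fragility_rate(model: str, queries: list[dict],
                   intent_matches: Callable[[str, str], bool]) -> float:
    """Fraction of romanized queries where the model restates the
    intent correctly (as judged by `intent_matches`, e.g. a human
    annotator or a judge model) yet still emits the wrong triage label."""
    fragile = 0
    for q in queries:
        understood = intent_matches(restate_intent(model, q["roman"]),
                                    q["gold_intent"])
        mislabeled = triage_label(model, q["roman"]) != q["label"]
        fragile += understood and mislabeled
    return fragile / len(queries)
```

A high fragility rate is the signature the paper describes: comprehension survives romanization, but the final classification does not.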
🔎 Similar Papers
No similar papers found.
Manurag Khullar
School of Computing and Information, University of Pittsburgh
Utkarsh Desai
School of Computing and Information, University of Pittsburgh
Poorva Malviya
School of Computing and Information, University of Pittsburgh
Aman Dalmia
Lifelong learner
Machine Learning · Artificial Intelligence · Climate Change · Social Good · Education
Zheyuan Ryan Shi
University of Pittsburgh
AI for Social Good · Machine Learning · Game Theory · Reinforcement Learning