AI Summary
This study addresses potential gender bias in emergency department triage by proposing the first domain-agnostic counterfactual large language model (LLM) framework designed for auditing human clinical decision-making. Methodologically, it generates counterfactual patient pairs differing solely in gender, while rigorously controlling all other clinical features, to isolate and quantify gender's causal effect on triage scores; it jointly models explicit and implicit gender cues from both structured variables and unstructured clinical text. Its key contribution is the integration of counterfactual reasoning with LLMs for clinical bias detection. Evaluated on 150,000 real-world French ED cases, the framework reveals that women are systematically assigned 2.1% lower triage priority on average, projecting to over 200,000 annual misclassifications in France. Critically, these findings replicate robustly across languages and healthcare systems, as validated on the English-language MIMIC-IV dataset.
Abstract
We present a novel, domain-agnostic counterfactual approach that uses Large Language Models (LLMs) to quantify gender disparities in human clinical decision-making. The method trains an LLM to emulate observed decisions, then evaluates counterfactual pairs in which only gender is flipped, estimating directional disparities while holding all other clinical factors constant. We study emergency triage, validating the approach on more than 150,000 admissions to the Bordeaux University Hospital (France) and replicating results on a subset of MIMIC-IV across a different language, population, and healthcare system. In the Bordeaux cohort, otherwise identical presentations were approximately 2.1% more likely to receive a lower-severity triage score when presented as female rather than male; scaled to national emergency volumes in France, this corresponds to more than 200,000 lower-severity assignments per year. Modality-specific analyses indicate that both explicit tabular gender indicators and implicit textual gender cues contribute to the disparity. Beyond emergency care, the approach supports bias audits in other settings (e.g., hiring, academic, and justice decisions), providing a scalable tool to detect and address inequities in real-world decision-making.
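The core audit loop described above can be sketched in a few lines: construct a counterfactual twin of each case that differs only in gender, score both versions with the trained model, and count the directional disparity. The following is a minimal illustration, not the paper's implementation; `triage_model` is a hypothetical stand-in for the LLM that emulates clinician decisions, and the convention that a numerically higher triage score means lower severity is an assumption for the example.

```python
def flip_gender(case):
    """Return a copy of the case with only the gender field swapped,
    leaving every other clinical feature unchanged."""
    flipped = dict(case)
    flipped["gender"] = "F" if case["gender"] == "M" else "M"
    return flipped

def directional_disparity(cases, triage_model):
    """Fraction of counterfactual pairs in which the female version
    receives a lower-severity triage score than the otherwise-identical
    male version (assuming higher numeric score = lower severity)."""
    lower_for_female = 0
    for case in cases:
        male = case if case["gender"] == "M" else flip_gender(case)
        female = flip_gender(male)
        if triage_model(female) > triage_model(male):
            lower_for_female += 1
    return lower_for_female / len(cases)
```

In the actual framework, `triage_model` would be the LLM trained to emulate observed triage decisions, and the cases would include both structured variables and clinical text, so that implicit textual gender cues are flipped along with the explicit indicator.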