Enabling Scalable Evaluation of Bias Patterns in Medical LLMs

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fairness evaluation of medical large language models (LLMs) relies on small, narrow, and poorly interpretable manually constructed test sets.

Method: We propose the first medical-evidence-driven automated bias test generation framework, integrating SNOMED CT/ICD ontologies, biomedical knowledge graphs, and controllable text generation to build a domain-customized LLM evaluation framework. It enables modeling of sensitive attribute–health outcome dependencies, domain-specific bias characterization, and hallucination mitigation.

Contributions/Results: (1) Significantly improved bias detection coverage and sensitivity; (2) Released MedBiasBench—the first open-source, large-scale medical bias benchmark dataset; (3) Enabled multi-disease analysis and deployed an online vignette generation demonstration system, advancing scalability, standardization, and interpretability in medical LLM fairness evaluation.

📝 Abstract
Large language models (LLMs) have shown impressive potential in helping with numerous medical challenges. Deploying LLMs in high-stakes applications such as medicine, however, brings in many concerns. One major area of concern relates to biased behaviors of LLMs in medical applications, leading to unfair treatment of individuals. To pave the way for the responsible and impactful deployment of Med LLMs, rigorous evaluation is a key prerequisite. Due to the huge complexity and variability of different medical scenarios, existing work in this domain has primarily relied on using manually crafted datasets for bias evaluation. In this study, we present a new method to scale up such bias evaluations by automatically generating test cases based on rigorous medical evidence. We specifically target the challenges of a) domain-specificity of bias characterization, b) hallucinating while generating the test cases, and c) various dependencies between the health outcomes and sensitive attributes. To that end, we offer new methods to address these challenges integrated with our generative pipeline, using medical knowledge graphs, medical ontologies, and customized general LLM evaluation frameworks in our method. Through a series of extensive experiments, we show that the test cases generated by our proposed method can effectively reveal bias patterns in Med LLMs at larger and more flexible scales than human-crafted datasets. We publish a large bias evaluation dataset using our pipeline, which is dedicated to a few medical case studies. A live demo of our application for vignette generation is available at https://vignette.streamlit.app. Our code is also available at https://github.com/healthylaife/autofair.
Problem

Research questions and friction points this paper is trying to address.

Evaluating bias in medical LLMs for fair treatment
Automating test case generation for scalable bias assessment
Addressing domain-specific bias and dependencies in health outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically generates test cases using medical evidence
Integrates medical knowledge graphs and ontologies
Customizes general LLM evaluation frameworks
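The generation idea described above can be illustrated with a minimal counterfactual sketch: hold the clinical content of a vignette fixed, vary only sensitive attributes, and flag the model under test when its outputs depend on those attributes. Everything below is hypothetical — the template, attribute lists, and stub scoring function are illustrative stand-ins, not the paper's actual pipeline, which draws on medical knowledge graphs and SNOMED CT/ICD ontologies to ground its test cases.

```python
from itertools import product

# Hypothetical vignette template; the paper derives clinical content
# from medical evidence rather than a hand-written string.
VIGNETTE = ("A {age}-year-old {sex} {race} patient presents with "
            "chest pain radiating to the left arm.")

# Illustrative sensitive attributes to vary counterfactually.
SENSITIVE = {
    "sex": ["male", "female"],
    "race": ["white", "Black"],
}

def generate_counterfactual_set(template, attributes, age=55):
    """Expand the template over all sensitive-attribute combinations,
    holding the clinical presentation fixed."""
    keys = list(attributes)
    cases = []
    for values in product(*(attributes[k] for k in keys)):
        slots = dict(zip(keys, values), age=age)
        cases.append((slots, template.format(**slots)))
    return cases

def bias_gap(cases, model):
    """Maximum difference in the model's score across counterfactual
    cases; a nonzero gap flags attribute-dependent behavior."""
    scores = [model(text) for _, text in cases]
    return max(scores) - min(scores)

if __name__ == "__main__":
    cases = generate_counterfactual_set(VIGNETTE, SENSITIVE)
    # Stub "model" standing in for the Med LLM under test: it scores
    # vignettes mentioning a male patient higher, so a gap appears.
    biased_model = lambda text: 0.9 if "female" not in text else 0.7
    print(len(cases), round(bias_gap(cases, biased_model), 2))
```

In a real evaluation, the stub would be replaced by a call to the Med LLM under test (e.g., scoring a recommended triage level), and the attribute grid would be constrained by ontology-backed evidence so that only clinically irrelevant attribute swaps are generated.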