Investigating Gender Stereotypes in Large Language Models via Social Determinants of Health

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the risk of gender bias propagation in large language models (LLMs) within sensitive domains such as healthcare, where existing bias evaluation frameworks often overlook the interactions among social determinants of health (SDoH) and their contextual dependencies. For the first time, this work employs multidimensional SDoH interactions as probing constructs, integrating prompt engineering and controlled probing experiments to systematically analyze stereotypical behaviors in prominent LLMs when gender co-occurs with other SDoH factors in French clinical texts. The findings demonstrate that LLMs activate gendered stereotypes in response to SDoH-related inputs, thereby confirming that incorporating SDoH interactions significantly enhances and refines current bias assessment methodologies.

📝 Abstract
Large Language Models (LLMs) excel in Natural Language Processing (NLP) tasks, but they often propagate biases embedded in their training data, which is potentially impactful in sensitive domains like healthcare. While existing benchmarks evaluate biases related to individual social determinants of health (SDoH) such as gender or ethnicity, they often overlook interactions between these factors and lack context-specific assessments. This study investigates bias in LLMs by probing the relationships between gender and other SDoH in French patient records. Through a series of experiments, we found that embedded stereotypes can be probed using SDoH input and that LLMs rely on embedded stereotypes to make gendered decisions, suggesting that evaluating interactions among SDoH factors could usefully complement existing approaches to assessing LLM performance and bias.
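The probing setup the abstract describes can be illustrated with a minimal sketch: generate paired prompts that vary only in SDoH attributes, then measure how often a model commits to a gendered completion. This is not the authors' code; the factor values, the French template, and the completion labels (`homme`/`femme`/`inconnu`) are all hypothetical placeholders for illustration.

```python
# Minimal sketch (not the paper's implementation) of counterfactual SDoH probing:
# build prompts that differ only in SDoH values, then score a model's
# completions for how often they assign a gender at all.
from itertools import product

# Illustrative SDoH factors and values -- not the paper's actual factor list.
SDOH_FACTORS = {
    "occupation": ["infirmier/ère", "ingénieur(e)", "sans emploi"],
    "living_situation": ["vit seul(e)", "vit en famille"],
}

# Hypothetical French clinical-note template.
TEMPLATE = (
    "Dossier patient : {occupation}, {living_situation}. "
    "Complétez le genre du patient :"
)

def build_probes(factors=SDOH_FACTORS, template=TEMPLATE):
    """Generate one probe prompt per combination of SDoH values."""
    keys = list(factors)
    return [
        template.format(**dict(zip(keys, values)))
        for values in product(*(factors[k] for k in keys))
    ]

def gendered_rate(completions):
    """Fraction of model completions that commit to a gender,
    given labels like 'homme', 'femme', or 'inconnu'."""
    if not completions:
        return 0.0
    return sum(c in {"homme", "femme"} for c in completions) / len(completions)

probes = build_probes()
print(len(probes))  # 3 occupations x 2 living situations = 6 probes
```

In a real experiment each probe would be sent to the LLM under test, and the gendered-completion rate compared across SDoH conditions to detect stereotype activation.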
Problem

Research questions and friction points this paper is trying to address.

gender stereotypes
large language models
social determinants of health
bias propagation
healthcare NLP
Innovation

Methods, ideas, or system contributions that make the work stand out.

social determinants of health
gender bias
large language models
bias probing
healthcare NLP
Trung Hieu Ngo
Nantes Université, École Centrale Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France
Adrien Bazoge
Nantes Université, CHU Nantes, Clinique des données, INSERM, CIC 1413, Nantes, France
Solen Quiniou
Nantes Université - LS2N
Natural language processing, data mining, handwriting recognition, human-computer interaction
Pierre-Antoine Gourraud
Nantes Université, CHU Nantes, Clinique des données, INSERM, CIC 1413, Nantes, France
Emmanuel Morin
Nantes Université, École Centrale Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France