What's in a Name? Auditing Large Language Models for Race and Gender Bias

📅 2024-02-21
🏛️ arXiv.org
📈 Citations: 21 · Influential: 2
🤖 AI Summary
This study systematically audits implicit racial and gender biases in large language models (LLMs) such as GPT-4, with a particular focus on names associated with Black women, in scenarios that involve a named individual (e.g., car purchase negotiation, election forecasting). The audit framework is built on 42 structured prompt templates, enabling comparative analysis across models and quantitative attribution of disparities to the race and gender signals carried by names. The analysis finds highly consistent race and gender interaction bias at the name level across models and templates. Crucially, adding numeric, decision-relevant anchors to the prompt substantially mitigates the bias, reducing adverse recommendation rates by up to 37%, whereas qualitative prompt details have inconsistent effects and can even widen disparities. The methodology offers a reproducible, scalable pathway for pre-deployment fairness evaluation of LLMs.
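
To make the audit design concrete, here is a minimal sketch of a name-substitution audit loop. The TEMPLATES, NAMES, and query_model stub are illustrative assumptions, not the paper's actual 42 templates or name lists; the example names are placeholders of the kind commonly used in audit studies.

```python
# Minimal sketch of a name-substitution audit loop (assumed structure).
import itertools
import statistics

# Hypothetical prompt templates with a {name} slot, in the spirit of the
# paper's scenarios (car purchase negotiation, election forecasting, ...).
TEMPLATES = [
    "What initial offer should {name} make when buying a used car listed at $20,000?",
    "Estimate {name}'s chances of winning a local city council election.",
]

# Hypothetical first names grouped by the demographic association studied.
NAMES = {
    ("Black", "female"): ["Lakisha", "Tamika"],
    ("White", "male"): ["Greg", "Brad"],
}

def query_model(prompt: str) -> float:
    """Placeholder for an LLM call that returns a numeric outcome
    (e.g., a suggested dollar amount) parsed from the response."""
    return 15000.0  # stub value; replace with a real API call plus a parser

results: dict[tuple[str, str], list[float]] = {group: [] for group in NAMES}
for template, (group, names) in itertools.product(TEMPLATES, NAMES.items()):
    for name in names:
        results[group].append(query_model(template.format(name=name)))

# Compare mean outcomes across groups; a persistent gap that is stable
# across templates and models is the signature of systemic name-level bias.
for group, outcomes in results.items():
    print(group, statistics.mean(outcomes))
```
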

📝 Abstract
We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.
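
The abstract's mitigation finding can be illustrated with a small sketch that compares an unanchored prompt against one carrying a numeric, decision-relevant anchor. The prompt wording, the $18,000 anchor, and the query_model stub below are assumptions made for illustration, not the paper's exact materials.

```python
# Sketch of the numeric-anchoring comparison described in the abstract.

def query_model(prompt: str) -> float:
    """Placeholder: call an LLM and parse a dollar amount from its reply."""
    return 15000.0  # stub; replace with a real API call and response parser

BASE = "What initial offer should {name} make when buying a used car?"
ANCHORED = (
    "What initial offer should {name} make when buying a used car "
    "whose market value is $18,000?"  # decision-relevant numeric anchor
)

def name_gap(template: str, name_a: str, name_b: str) -> float:
    """Gap between the outcomes the model recommends for two names."""
    return query_model(template.format(name=name_a)) - query_model(
        template.format(name=name_b)
    )

# A gap that shrinks in the anchored condition reproduces, in miniature,
# the abstract's claim that numerical anchors counteract the bias.
print(abs(name_gap(BASE, "Lakisha", "Greg")))
print(abs(name_gap(ANCHORED, "Lakisha", "Greg")))
```
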
Problem

Research questions and friction points this paper is trying to address.

Bias Evaluation
Large Language Models
Racial and Gender Bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias Detection
Large Language Models
Fairness in AI