Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates intersectional bias in large language models (LLMs): how models exhibit differential confidence in coreference resolution across intersecting demographic identities (e.g., race × gender, age × gender), revealing omission-based societal harms. The authors propose *Coreference Confidence Disparity*, a metric quantifying confidence gaps across identity groups, and introduce WinoIdentity, a benchmark extending WinoBias with 25 demographic markers across 10 attributes intersected with binary gender, to systematically evaluate 50 intersectional bias patterns. Experiments on five recently published LLMs uncover confidence disparities as high as 40%: models are markedly less confident for doubly disadvantaged groups in anti-stereotypical contexts, and even privileged groups exhibit unexpected confidence drops. These findings suggest overreliance on memorized statistical associations rather than logical reasoning, exposing two independent failures, in value alignment and in inferential validity, that can compound to cause social harm.

📝 Abstract
Large language models (LLMs) have achieved impressive performance, leading to their widespread adoption as decision-support tools in resource-constrained contexts like hiring and admissions. There is, however, scientific consensus that AI systems can reflect and exacerbate societal biases, raising concerns about identity-based harm when used in critical social contexts. Prior work has laid a solid foundation for assessing bias in LLMs by evaluating demographic disparities in different language reasoning tasks. In this work, we extend single-axis fairness evaluations to examine intersectional bias, recognizing that when multiple axes of discrimination intersect, they create distinct patterns of disadvantage. We create a new benchmark called WinoIdentity by augmenting the WinoBias dataset with 25 demographic markers across 10 attributes, including age, nationality, and race, intersected with binary gender, yielding 245,700 prompts to evaluate 50 distinct bias patterns. Focusing on harms of omission due to underrepresentation, we investigate bias through the lens of uncertainty and propose a group (un)fairness metric called Coreference Confidence Disparity, which measures whether models are more or less confident for some intersectional identities than others. We evaluate five recently published LLMs and find confidence disparities as high as 40% along various demographic attributes including body type, sexual orientation and socio-economic status, with models being most uncertain about doubly-disadvantaged identities in anti-stereotypical settings. Surprisingly, coreference confidence decreases even for hegemonic or privileged markers, indicating that the recent impressive performance of LLMs is more likely due to memorization than logical reasoning. Notably, these are two independent failures in value alignment and validity that can compound to cause social harm.
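To make the benchmark construction concrete, the sketch below crosses a single WinoBias-style template with demographic markers and binary gender to enumerate prompts. The template sentence, marker lists, and attribute names are hypothetical stand-ins for illustration, not the paper's actual data; WinoIdentity covers 25 markers across 10 attributes, yielding 245,700 prompts.

```python
# Hypothetical sketch of WinoIdentity-style prompt generation: cross a
# WinoBias-style template with demographic markers and binary gender.
from itertools import product

# One template; "{marker}" is the inserted demographic marker and
# "{pronoun}" the gendered pronoun whose antecedent must be resolved.
TEMPLATE = ("The {marker} physician hired the secretary "
            "because {pronoun} was overwhelmed with clients.")

# Illustrative markers grouped by attribute (the paper uses 25 markers
# across 10 attributes such as age, nationality, and race).
MARKERS = {
    "age": ["young", "elderly"],
    "nationality": ["American", "Nigerian"],
}
PRONOUNS = {"female": "she", "male": "he"}

def generate_prompts():
    """Yield (attribute, marker, gender, prompt) for every marker x gender pair."""
    for attribute, markers in MARKERS.items():
        for marker, (gender, pronoun) in product(markers, PRONOUNS.items()):
            yield attribute, marker, gender, TEMPLATE.format(
                marker=marker, pronoun=pronoun)

for attribute, marker, gender, prompt in generate_prompts():
    print(f"[{attribute}/{marker}/{gender}] {prompt}")
```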
Problem

Research questions and friction points this paper is trying to address.

Investigating intersectional bias in large language models
Measuring confidence disparities in coreference resolution tasks
Evaluating demographic fairness across 10 attributes in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends single-axis fairness to intersectional bias evaluation
Introduces WinoIdentity benchmark with 245,700 demographic prompts
Proposes Coreference Confidence Disparity metric for model uncertainty (see the sketch below)
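The page does not include reference code, so the following is a minimal sketch of the idea behind the Coreference Confidence Disparity metric, assuming we already have per-prompt confidences (the probability the model assigns to the correct antecedent), grouped by demographic marker. The `confidence_disparity` helper and the exact aggregation are assumptions for illustration; the paper's definition may differ.

```python
# Minimal sketch: compare mean coreference confidence across identity groups.
from statistics import mean

def confidence_disparity(confidences_by_group: dict[str, list[float]]) -> float:
    """Largest gap between group-mean confidences (0 = perfectly even)."""
    group_means = {g: mean(scores) for g, scores in confidences_by_group.items()}
    return max(group_means.values()) - min(group_means.values())

# Toy example: confidence the model put on the correct antecedent per
# demographic marker (fabricated numbers, for illustration only).
scores = {
    "young":   [0.91, 0.88, 0.93],
    "elderly": [0.62, 0.55, 0.70],
}
print(f"confidence disparity: {confidence_disparity(scores):.2f}")  # 0.28
```

In this toy example the model is, on average, 28 percentage points less confident for one group than the other, the kind of gap (up to 40% in the paper's experiments) the metric is designed to surface.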