Attribution Bias in Large Language Models

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses systematic demographic biases in quote attribution by large language models (LLMs), which struggle to accurately credit quotes to authors across racial, gender, and intersectional identity groups. To investigate this issue, the authors introduce AttriBench, the first benchmark dataset balanced for both author fame and demographic representation. They further propose a multi-prompt evaluation framework that uncovers a "suppression" failure mode—where models omit attribution altogether—that conventional accuracy metrics fail to detect. Experiments across eleven widely used LLMs reveal large, systematic disparities in attribution accuracy across demographic groups, with suppression behaviors unevenly distributed. These findings highlight critical fairness challenges in how current LLMs handle author credit assignment, underscoring the need for more equitable evaluation and modeling practices.
📝 Abstract
As Large Language Models (LLMs) are increasingly used to support search and information retrieval, it is critical that they accurately attribute content to its original authors. In this work, we introduce AttriBench, the first fame- and demographically-balanced quote attribution benchmark dataset. Through explicitly balancing author fame and demographics, AttriBench enables controlled investigation of demographic bias in quote attribution. Using this dataset, we evaluate 11 widely used LLMs across different prompt settings and find that quote attribution remains a challenging task even for frontier models. We observe large and systematic disparities in attribution accuracy between race, gender, and intersectional groups. We further introduce and investigate suppression, a distinct failure mode in which models omit attribution entirely, even when the model has access to authorship information. We find that suppression is widespread and unevenly distributed across demographic groups, revealing systematic biases not captured by standard accuracy metrics. Our results position quote attribution as a benchmark for representational fairness in LLMs.
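The abstract distinguishes two per-group quantities: standard attribution accuracy and a separate suppression rate (the fraction of items where the model omits attribution entirely). A minimal sketch of how these could be computed is below; the record schema (`group`, `predicted`, `gold`) is a hypothetical stand-in, not the paper's actual data format, and `predicted is None` is assumed to encode an omitted attribution.

```python
from collections import defaultdict

def attribution_metrics(records):
    """Compute per-group accuracy and suppression rate.

    Each record is a dict with (hypothetical) keys:
      group     -- demographic group label
      predicted -- the model's attributed author, or None if it gave none
      gold      -- the true author
    """
    totals = defaultdict(lambda: {"n": 0, "correct": 0, "suppressed": 0})
    for r in records:
        t = totals[r["group"]]
        t["n"] += 1
        if r["predicted"] is None:
            # Suppression: the model produced no attribution at all.
            t["suppressed"] += 1
        elif r["predicted"] == r["gold"]:
            t["correct"] += 1
    return {
        g: {
            "accuracy": t["correct"] / t["n"],
            "suppression_rate": t["suppressed"] / t["n"],
        }
        for g, t in totals.items()
    }
```

Keeping suppression as its own rate, rather than folding omissions into errors, is what lets disparities of this kind surface: two groups can have identical accuracy while one receives far more suppressed (omitted) attributions.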
Problem

Research questions and friction points this paper is trying to address.

attribution bias
large language models
demographic bias
quote attribution
representational fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

quote attribution
demographic bias
suppression
representational fairness
AttriBench