Social Bias in Popular Question-Answering Benchmarks

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Leading question-answering and reading comprehension benchmarks exhibit significant demographic and geographic representation biases, stemming from insufficient diversity and transparency among dataset creators and annotators—thereby compromising the validity of LLM knowledge evaluation. Method: This study conducts the first cross-dimensional bias audit across 30 benchmark papers and 20 datasets, introducing a “creator–annotator–content” triadic analytical framework that integrates qualitative content analysis with quantitative statistical tests (e.g., gender, religion, and regional representation). Contribution/Results: We find that 95% of benchmarks fail to disclose annotator demographics; only one implements explicit fairness measures; and systematic representational imbalances are pervasive. Our work establishes a reproducible, multidimensional bias assessment paradigm for benchmark development and advances data governance toward inclusivity and full provenance.

📝 Abstract
Question-answering (QA) and reading comprehension (RC) benchmarks are essential for assessing the capabilities of large language models (LLMs) in retrieving and reproducing knowledge. However, we demonstrate that popular QA and RC benchmarks are biased and do not cover questions about different demographics or regions in a representative way, potentially due to a lack of diversity of those involved in their creation. We perform a qualitative content analysis of 30 benchmark papers and a quantitative analysis of 20 respective benchmark datasets to learn (1) who is involved in the benchmark creation, (2) how social bias is addressed or prevented, and (3) whether the demographics of the creators and annotators correspond to particular biases in the content. Most analyzed benchmark papers provided insufficient information regarding the stakeholders involved in benchmark creation, particularly the annotators. Notably, just one of the benchmark papers explicitly reported measures taken to address social representation issues. Moreover, the data analysis revealed gender, religion, and geographic biases across a wide range of encyclopedic, commonsense, and scholarly benchmarks. More transparent and bias-aware QA and RC benchmark creation practices are needed to facilitate better scrutiny and incentivize the development of fairer LLMs.
Problem

Research questions and friction points this paper is trying to address.

Popular QA benchmarks lack demographic and regional representation
Social bias in benchmarks due to creator diversity gaps
Need transparent bias-aware practices for fairer LLM evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Qualitative content analysis of benchmark papers
Quantitative analysis of benchmark datasets
Identified gender, religion, geographic biases
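The quantitative side of the audit checks whether benchmark questions represent demographic groups in proportion to a reference population. A minimal way to test this is a chi-square goodness-of-fit test comparing observed question counts per group against baseline shares. The sketch below illustrates the idea only; all counts, category labels, and baseline shares are hypothetical, not figures from the audited benchmarks or the paper's actual pipeline.

```python
# Minimal sketch of a chi-square goodness-of-fit check for demographic
# representation, in the spirit of the paper's quantitative analysis.
# All counts and baseline shares are hypothetical illustrations.

def chi_square_statistic(observed, expected):
    """Pearson chi-square statistic over matched count lists."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts of benchmark questions per religion category.
observed = [620, 140, 90, 50]

# Hypothetical reference shares (e.g. world-population proportions).
baseline_shares = [0.31, 0.25, 0.15, 0.29]
total = sum(observed)
expected = [share * total for share in baseline_shares]

stat = chi_square_statistic(observed, expected)
# Critical value for df = 3 at alpha = 0.05 (standard chi-square table).
CRITICAL_3DF_05 = 7.815
print(f"chi2 = {stat:.1f}")
print("imbalanced" if stat > CRITICAL_3DF_05 else "consistent with baseline")
```

A statistic far above the critical value indicates the benchmark's coverage deviates significantly from the chosen baseline; the harder methodological question, which the paper discusses, is which baseline population is appropriate.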
Angelie Kraft (University of Hamburg, Leuphana University Lüneburg, Weizenbaum Institute)
Judith Simon (University of Hamburg)
Sonja Schimmler (Fraunhofer FOKUS)