🤖 AI Summary
Leading question-answering (QA) and reading comprehension (RC) benchmarks exhibit significant demographic and geographic representation biases, stemming from insufficient diversity and transparency among dataset creators and annotators, which undermines the validity of LLM knowledge evaluation. Method: The study audits 30 benchmark papers and 20 of the corresponding benchmark datasets, introducing a creator–annotator–content analytical framework that combines qualitative content analysis with quantitative statistical tests of gender, religious, and regional representation. Contribution/Results: 95% of the benchmark papers fail to disclose annotator demographics; only one explicitly reports fairness measures; and systematic representational imbalances are pervasive across encyclopedic, commonsense, and scholarly benchmarks. The work offers a reproducible, multidimensional bias-assessment procedure for benchmark development and calls for more transparent, bias-aware benchmark creation practices.
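The quantitative side of such an audit can be reproduced with standard statistical tooling. Below is a minimal, hypothetical sketch (not the authors' released code) of one representation test: a chi-square goodness-of-fit test comparing the observed distribution of gendered references in a benchmark's questions against a chosen reference baseline. The counts and the parity baseline are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of a representation-bias check, not the paper's released code.
# Assumes demographic mentions have already been extracted from benchmark
# questions; the counts below are illustrative placeholders.
from collections import Counter
from scipy.stats import chisquare

# Hypothetical observed counts of gendered references in one benchmark.
observed = Counter({"male": 812, "female": 371, "non-binary/other": 4})

# Reference distribution to test against; equal representation (parity) is one
# defensible baseline, census or corpus frequencies are alternatives.
categories = list(observed)
total = sum(observed.values())
expected = [total / len(categories)] * len(categories)

# Chi-square goodness-of-fit: a small p-value indicates the benchmark's
# representation deviates significantly from the chosen baseline.
stat, p = chisquare([observed[c] for c in categories], f_exp=expected)
print(f"chi2={stat:.1f}, p={p:.3g}")
for c in categories:
    print(f"{c}: {observed[c]} mentions ({observed[c] / total:.1%})")
```

The same test applies unchanged to religion or region categories; only the extraction step and the choice of baseline differ per dimension.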
📝 Abstract
Question-answering (QA) and reading comprehension (RC) benchmarks are essential for assessing the capabilities of large language models (LLMs) in retrieving and reproducing knowledge. However, we demonstrate that popular QA and RC benchmarks are biased and do not cover questions about different demographics or regions in a representative way, potentially due to a lack of diversity among those involved in their creation. We perform a qualitative content analysis of 30 benchmark papers and a quantitative analysis of the 20 corresponding benchmark datasets to learn (1) who is involved in the benchmark creation, (2) how social bias is addressed or prevented, and (3) whether the demographics of the creators and annotators correspond to particular biases in the content. Most analyzed benchmark papers provided insufficient information regarding the stakeholders involved in benchmark creation, particularly the annotators. Notably, just one of the benchmark papers explicitly reported measures taken to address social representation issues. Moreover, the data analysis revealed gender, religion, and geographic biases across a wide range of encyclopedic, commonsense, and scholarly benchmarks. More transparent and bias-aware QA and RC benchmark creation practices are needed to facilitate better scrutiny and incentivize the development of fairer LLMs.