Bias is a Math Problem, AI Bias is a Technical Problem: 10-year Literature Review of AI/LLM Bias Research Reveals Narrow [Gender-Centric] Conceptions of 'Bias', and Academia-Industry Gap

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies three structural deficiencies in AI/LLM bias research through a systematic literature review of 189 papers from ACL, FAccT, NeurIPS, and AAAI (2014–2023). **Problem**: (1) Conceptual ambiguity—82% lack an explicit definition of “bias,” relying excessively on technical proxies; (2) Dimensional imbalance—79.9% focus narrowly on gender (primarily occupational stereotypes), while underrepresenting race/ethnicity (30.2%), age (20.6%), religion (19.1%), and nationality (13.2%), and largely excluding non-Western populations; (3) Industry–academia misalignment—only 10.6% propose deployable debiasing interventions, hindering real-world adoption. **Method**: Quantitative content analysis combined with critical thematic synthesis. **Contribution/Results**: First empirical quantification of the field’s dual limitations—“technocentric bias conceptualization” and “gender-centric framing”—and proposal of a socially grounded fairness framework integrating intersectional identities, alongside urgent emphasis on translational pathway design for equitable AI deployment.

📝 Abstract
The rapid development of AI tools and the implementation of LLMs within downstream tasks have been paralleled by a surge in research exploring how the outputs of such AI/LLM systems embed biases, a topic that was already being extensively explored before the era of ChatGPT. Given the high volume of research on biases within the outputs of AI systems and LLMs, it is imperative to conduct systematic literature reviews to document throughlines within such research. In this paper, we conduct such a review of research covering AI/LLM bias in four premier venues/organizations -- ACL, FAccT, NeurIPS, and AAAI -- published over the past 10 years. Through a coverage of 189 papers, we uncover patterns in bias research and the axes of human identity on which it commonly focuses. The first emergent pattern within the corpus was that 82% (155/189) of papers did not establish a working definition of "bias" for their purposes, opting instead to simply state that biases and stereotypes exist with harmful downstream effects, while establishing only mathematical and technical definitions of bias. 94 of these 155 papers were published in the past 5 years, after Blodgett et al. (2020)'s literature review reported a similar finding about NLP research and recommended that researchers conceptualize bias beyond strictly technical definitions. Furthermore, we find that a large majority of papers -- 79.9%, or 151/189 -- focus on gender bias (mostly gender and occupation bias) within the outputs of AI systems and LLMs. By demonstrating the field's strong focus on gender relative to race/ethnicity (30.2%; 57/189), age (20.6%; 39/189), religion (19.1%; 36/189), and nationality (13.2%; 25/189) bias, we document how researchers adopt a fairly narrow conception of AI bias that overlooks several non-Western communities in fairness research, and we advocate for stronger coverage of such populations. Finally, we note that while our corpus contains several examples of innovative debiasing methods across the aforementioned aspects of human identity, only 10.6% (20/189) include recommendations for how to implement their findings or contributions in real-world AI systems or design processes. This indicates a concerning academia-industry gap, especially since many of the biases for which our corpus contains successful mitigation methods still persist within the outputs of AI systems and LLMs commonly used today. We conclude with recommendations for future AI/LLM fairness research, with a stronger focus on diverse marginalized populations.
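As a quick sanity check on the proportions reported above, the percentages can be recomputed directly from the raw counts over the 189-paper corpus. The snippet below is a minimal illustrative sketch, not code from the paper; the counts are taken verbatim from the abstract, and the dictionary labels are our own shorthand.

```python
# Recompute the percentages reported in the abstract from the raw
# paper counts over the 189-paper corpus.
# Labels are our own shorthand, not terminology from the paper.

TOTAL_PAPERS = 189

counts = {
    "no working definition of bias": 155,
    "gender bias": 151,
    "race/ethnicity bias": 57,
    "age bias": 39,
    "religion bias": 36,
    "nationality bias": 25,
    "real-world implementation advice": 20,
}

for label, n in counts.items():
    pct = 100 * n / TOTAL_PAPERS
    print(f"{label}: {n}/{TOTAL_PAPERS} = {pct:.1f}%")
```

Running this reproduces the abstract's figures (82.0%, 79.9%, 30.2%, 20.6%, 13.2%, 10.6%), with religion rounding to 19.0% rather than the reported 19.1%, presumably a rounding difference in the paper.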
Problem

Research questions and friction points this paper is trying to address.

Analyzing narrow conceptions of bias in AI/LLM research
Documenting academia-industry gap in bias mitigation implementation
Advocating broader coverage of non-Western communities in fairness research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic literature review of 189 AI/LLM bias papers from ACL, FAccT, NeurIPS, and AAAI (2014–2023)
Quantification of the field's disproportionate focus on gender bias relative to race, age, religion, and nationality
Documentation of the academia-industry gap in debiasing implementation
Sourojit Ghosh
University of Washington, Seattle
Kyra Wilson
PhD Student, University of Washington