Queer NLP: A Critical Survey on Literature Gaps, Biases and Trends

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses systemic biases in natural language processing (NLP) technologies that disproportionately affect marginalized groups, particularly LGBTQIA+ communities, an issue insufficiently examined in existing literature. For the first time, it offers a systematic review of all NLP research in the ACL Anthology that explicitly engages with LGBTQIA+ topics, drawing on queer theory and employing qualitative content analysis within a critical theoretical framework. The analysis reveals that current approaches predominantly focus on post-hoc bias detection rather than proactive inclusive design. In response, this work proposes a forward-looking agenda emphasizing meaningful stakeholder engagement, intersectional analysis, exploration of non-English linguistic contexts, and deeper interdisciplinary collaboration. These recommendations aim to chart a pathway toward more equitable and inclusive NLP systems.

📝 Abstract
Natural language processing (NLP) technologies are rapidly reshaping how language is created, processed, and analyzed by humans. With current and potential applications in hiring, law, healthcare, and other areas that impact people's lives, understanding and mitigating harms towards marginalized groups is critical. In this survey, we examine NLP research papers that explicitly address the relationship between LGBTQIA+ communities and NLP technologies. We systematically review all such papers published in the ACL Anthology, to answer the following research questions: (1) What are current research trends? (2) What gaps exist in terms of topics and methods? (3) What areas are open for future work? We find that while the number of papers on queer NLP has grown within the last few years, most papers take a reactive rather than a proactive approach, pointing out bias more often than mitigating it, and focusing on shortcomings of existing systems rather than creating new solutions. Our survey uncovers many opportunities for future work, especially regarding stakeholder involvement, intersectionality, interdisciplinarity, and languages other than English. We also offer an outlook from a queer studies perspective, highlighting understudied topics and gaps in the harms addressed in NLP papers. Beyond being a roadmap of what has been done, this survey is a call to action for work towards more just and inclusive NLP technologies.
Problem

Research questions and friction points this paper is trying to address.

Queer NLP
LGBTQIA+
bias in NLP
marginalized groups
inclusive NLP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Queer NLP
bias mitigation
intersectionality
stakeholder involvement
inclusive AI
Sabine Weber
PhD Student, ILCC, School of Informatics, University of Edinburgh
Natural Language Processing, Computational Linguistics, Artificial Intelligence, Machine Learning

Angelina Wang
Cornell Tech
machine learning fairness, evaluation and measurement

Ankush Gupta
IIIT-DELHI, PayGlocal

Arjun Subramonian
FAIR (Meta Platforms Inc.)
AI, society, fairness

Dennis Ulmer
University of Amsterdam
Natural Language Processing, Machine Learning, Deep Learning, Generalization, Uncertainty

Eshaan Tanwar
Københavns Universitet
Natural Language Processing

Geetanjali Aich
MS CS Student, University of Massachusetts Amherst
Image Processing, Computer Vision, Satellite Image Processing, Neuroscience, Brain Complex Network

Hannah Devinney
Dept. of Thematic Studies, Unit of Gender Studies, Linköping University

Jacob Hobbs
Queer in AI

Jennifer Mickel
UT Austin

Joshua Tint
Arizona State University
natural language processing, fairness in machine learning, sociotechnical issues

Mae Sosto
Centrum Wiskunde & Informatica

Ray Groshan
University of Maryland, Baltimore County
NLP, Assistive Technology

Simone Astarita
Queer in AI

Vagrant Gautam
Heidelberg Institute for Theoretical Studies
natural language processing, computational linguistics, trustworthy nlp, fairness, reasoning

Verena Blaschke
LMU Munich
linguistic variation, natural language processing, quantitative linguistics

William Agnew
Postdoc, Carnegie Mellon University
Artificial Intelligence, Algorithms, Security

Wilson Y Lee
HubSpot

Yanan Long
University of Chicago
AI for Science, Bayesian Statistics, Geometric Deep Learning, Natural Language Processing, AI Ethics