Data Caricatures: On the Representation of African American Language in Pretraining Corpora

📅 2025-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the representation of African American Language (AAL) across 12 predominantly English, open-source pretraining corpora, revealing severe underrepresentation (as little as 0.007% of documents), along with deficits in source diversity and naturalistic usage, far below the demographic proportion of AAL speakers in the U.S. Method: Using a mixed-methods evaluation framework that combines quantitative corpus statistics, human judgments, and qualitative linguistic analysis, the study quantifies representational deficits and filter biases. Contribution/Results: It finds that more than 25% of AAL texts in C4 may be inappropriate for LLMs to generate and may reinforce harmful stereotypes; further, most automated language, toxicity, and quality filters are more likely to retain White Mainstream English (WME) texts than AAL texts, exacerbating linguistic inequity. The work frames these reductive and distorted AAL representations in training data as "data caricatures" and motivates more inclusive evaluation and curation of pretraining datasets.

📝 Abstract
With a combination of quantitative experiments, human judgments, and qualitative analyses, we evaluate the quantity and quality of African American Language (AAL) representation in 12 predominantly English, open-source pretraining corpora. We specifically focus on the sources, variation, and naturalness of included AAL texts representing the AAL-speaking community. We find that AAL is underrepresented in all evaluated pretraining corpora compared to US demographics, constituting as little as 0.007% of documents. We also find that more than 25% of AAL texts in C4 may be inappropriate for LLMs to generate and reinforce harmful stereotypes. Finally, we find that most automated language, toxicity, and quality filters are more likely to conserve White Mainstream English (WME) texts over AAL in pretraining corpora.
Problem

Research questions and friction points this paper is trying to address.

Evaluates African American Language representation in pretraining corpora.
Identifies underrepresentation and harmful stereotypes in AAL texts.
Highlights bias in automated filters favoring White Mainstream English.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines quantitative experiments, human judgments, and qualitative analyses.
Evaluates AAL representation across 12 open-source pretraining corpora.
Identifies bias in automated language, toxicity, and quality filters.
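The filter-bias finding above can be illustrated with a minimal sketch: compare how often a filter retains texts from two groups. The toy "quality" filter and the example sentences below are hypothetical stand-ins for illustration only, not the paper's actual filters or data.

```python
# Sketch: measuring whether a filter retains WME texts more often than AAL
# texts. The filter and samples are illustrative, not from the paper.

def retention_rate(texts, keep_filter):
    """Fraction of texts the filter would keep in a pretraining corpus."""
    kept = sum(1 for t in texts if keep_filter(t))
    return kept / len(texts)

# Hypothetical stand-in for a "quality" filter: a naive heuristic that
# drops texts containing informal spellings (illustration only).
INFORMAL_MARKERS = {"finna", "tryna", "ain't"}

def toy_quality_filter(text):
    return not any(tok in INFORMAL_MARKERS for tok in text.lower().split())

aal_sample = ["he finna go to the store", "they tryna finish early"]
wme_sample = ["he is about to go to the store", "they are trying to finish early"]

# A positive disparity means the filter favors WME over AAL.
disparity = (retention_rate(wme_sample, toy_quality_filter)
             - retention_rate(aal_sample, toy_quality_filter))
print(f"retention disparity (WME - AAL): {disparity:.2f}")
```

On these toy samples the heuristic keeps every WME text and drops every AAL text, the kind of differential retention the paper reports for real language, toxicity, and quality filters.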
Nicholas Deas
Computer Science PhD Candidate, Columbia University
Natural Language Processing, Computational Social Science, Social Psychology
Blake Vente
Columbia University, Department of Computer Science
Amith Ananthram
Columbia University
NLP, CV, AI
Jessica A. Grieser
University of Michigan, Department of Linguistics
D. Patton
University of Pennsylvania, School of Social Policy and Practice, Annenberg School for Communication
Shana Kleiner
University of Pennsylvania, School of Social Policy and Practice, Annenberg School for Communication
James Shepard
University of Tennessee, Knoxville, Department of English
Kathleen McKeown
Professor of Computer Science and Director, Data Science Institute, Columbia University
Artificial Intelligence, Natural Language Processing, Text Summarization