🤖 AI Summary
This study systematically evaluates the representation of African American Language (AAL) in 12 predominantly English, open-source pretraining corpora, finding severe underrepresentation relative to US demographics: AAL constitutes as little as 0.007% of documents, with further deficits in the sources, variation, and naturalness of the included texts. Method: The study combines quantitative experiments, human judgments, and qualitative analyses to assess both the quantity and quality of AAL representation. Results: More than 25% of AAL texts in C4 may be inappropriate for LLMs to generate and risk reinforcing harmful stereotypes; moreover, most automated language, toxicity, and quality filters are more likely to conserve White Mainstream English (WME) texts over AAL, compounding linguistic inequity in pretraining data curation.
📝 Abstract
With a combination of quantitative experiments, human judgments, and qualitative analyses, we evaluate the quantity and quality of African American Language (AAL) representation in 12 predominantly English, open-source pretraining corpora. We specifically focus on the sources, variation, and naturalness of included AAL texts representing the AAL-speaking community. We find that AAL is underrepresented in all evaluated pretraining corpora compared to US demographics, constituting as little as 0.007% of documents. We also find that more than 25% of AAL texts in C4 may be inappropriate for LLMs to generate and reinforce harmful stereotypes. Finally, we find that most automated language, toxicity, and quality filters are more likely to conserve White Mainstream English (WME) texts over AAL in pretraining corpora.