🤖 AI Summary
This study addresses representational biases in large language models (LLMs) when processing African American Vernacular English (AAVE), which often manifest as misuse of characteristic grammatical features such as "ain't" and as the reproduction of racial stereotypes. It presents the first systematic evaluation of how accurately LLMs generate AAVE, integrating corpus-based analysis (using CoRAAL and TwitterAAE), prompt engineering, grammatical feature comparison, sentiment analysis, and human annotation to compare model outputs against authentic native-speaker usage. The findings show that models consistently underrepresent or inaccurately deploy core AAVE syntactic structures, pointing to a lack of linguistic diversity in training data and underscoring the need for fairness-oriented interventions. The work provides both empirical evidence and a methodological framework for building more inclusive language technologies.
📝 Abstract
In AI, most evaluations of natural language understanding tasks are conducted in standardized dialects such as Standard American English (SAE). In this work, we investigate how accurately large language models (LLMs) represent African American Vernacular English (AAVE). We analyze three LLMs, comparing their usage of AAVE to that of native AAVE speakers. We first analyze interviews from the Corpus of Regional African American Language (CoRAAL) and tweets from TwitterAAE to identify the typical contexts in which speakers use AAVE grammatical features such as ain't. We then prompt the LLMs to produce text in AAVE and compare the model-generated text to human usage patterns. We find that, in many cases, there are substantial differences between AAVE usage in LLMs and humans: LLMs tend to underuse and misuse grammatical features characteristic of AAVE. Furthermore, through sentiment analysis and manual inspection, we find that the models replicate stereotypes about African Americans. These results highlight the need for more diversity in training data and for the incorporation of fairness methods to mitigate the perpetuation of stereotypes.
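The feature-rate comparison described above can be sketched in a few lines. This is a minimal illustration only, not the paper's actual pipeline: the regex patterns, sample texts, and per-1,000-token normalization are all assumptions made here for demonstration, and the study's real feature inventory and corpus tooling are not specified in this summary.

```python
import re

# Hypothetical regex patterns for two AAVE grammatical features
# (illustrative only; the paper's actual feature set is not given here).
FEATURES = {
    "ain't": re.compile(r"\bain'?t\b", re.IGNORECASE),
    "multiple negation": re.compile(
        r"\b(?:don't|didn't|ain'?t)\s+\w*\s*(?:no|nothing|nobody)\b",
        re.IGNORECASE,
    ),
}

def feature_rates(text: str) -> dict:
    """Return occurrences of each feature per 1,000 whitespace tokens."""
    n_tokens = max(len(text.split()), 1)
    return {
        name: 1000 * len(pattern.findall(text)) / n_tokens
        for name, pattern in FEATURES.items()
    }

# Toy stand-ins for a native-speaker sample and a model-generated sample.
human_sample = "I ain't got no time for that, he don't know nothing about it."
model_sample = "I do not have time for that; he knows nothing about it."

human_rates = feature_rates(human_sample)
model_rates = feature_rates(model_sample)

# A positive gap indicates the model underuses the feature relative to humans.
for name in FEATURES:
    gap = human_rates[name] - model_rates[name]
    print(f"{name}: human={human_rates[name]:.1f}, "
          f"model={model_rates[name]:.1f}, gap={gap:.1f}")
```

On these toy samples, the model text shows a rate of zero for both features, mirroring the underuse pattern the abstract reports; a real analysis would of course require large corpora, careful tokenization, and human annotation rather than surface regexes.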