🤖 AI Summary
This study investigates “representational harm” concerning gender in generative AI: although women’s presence in generated texts (e.g., biographies) has increased, their portrayals remain saturated with stereotypical language and neoliberal discourse, reinforcing traditional gender roles and power asymmetries. Combining statistical significance testing (word frequency, co-occurrence, semantic bias), cross-model comparisons, and qualitative discourse analysis, the work shows empirically that gains in quantitative representation do not entail qualitative fairness: expanding representation without addressing portrayal can reinforce systemic bias. The results indicate that mainstream large language models exhibit stable, statistically significant gendered semantic differences across diverse prompts and domains. The study also proposes a diagnostic framework grounded in computational linguistics and critical discourse analysis, enabling fine-grained, interpretable assessment of representational equity beyond surface-level metrics.
📝 Abstract
To recognize and mitigate the harms of generative AI systems, it is crucial to consider both who is represented in their outputs and how those people are represented. A critical gap emerges when who is represented is naively improved, since this does not imply that bias mitigation has also addressed how people are represented. We examine this gap by investigating gender representation across occupations in state-of-the-art large language models. We first present evidence that, over time, interventions have altered models’ resulting gender distributions, and we find that women are represented more often than men when models are prompted to generate biographies or personas. We then show that representational biases persist in how different genders are portrayed, by examining statistically significant word differences across genders. The result is a proliferation of representational harms, stereotypes, and neoliberal ideals that, despite interventions to increase female representation, reinforce existing systems of oppression.
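The kind of word-difference analysis described above can be illustrated with a minimal sketch: a log-odds ratio with a normal-approximation z-test for whether a given word appears at significantly different rates in biographies generated for two gender groups. This is not the paper’s actual pipeline, and the counts below are invented toy numbers, not the study’s data.

```python
import math

def log_odds_word_test(count_a, total_a, count_b, total_b):
    """Log-odds ratio with a normal-approximation z-test for whether a word
    is used at different rates in two sets of generated biographies."""
    a, b = count_a, total_a - count_a   # group A: bios with word / without
    c, d = count_b, total_b - count_b   # group B: bios with word / without
    log_or = math.log((a / b) / (c / d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = log_or / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return log_or, z, p

# Toy illustration (hypothetical counts): a stereotyped adjective appears
# in 120 of 1000 female-persona bios vs. 40 of 1000 male-persona bios.
log_or, z, p = log_odds_word_test(120, 1000, 40, 1000)
print(f"log-odds={log_or:.2f}, z={z:.2f}, p={p:.2e}")
```

In practice such a test would be run over the full generated vocabulary with a multiple-comparisons correction; a large positive log-odds with a small p-value flags a word as significantly over-represented in one group’s portrayals.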