🤖 AI Summary
This study identifies systematic biases in large language models (LLMs) when they generate queer narratives, manifesting as harmful representations, narrow representations, and discursive othering that together compress the range of queer lived experiences portrayed.

Method: The authors propose a tri-dimensional analytical framework — harmful representation, narrow representation, and discursive othering — to systematically interrogate structural constraints on queer narrative generation in LLMs. Integrating computational text analysis with critical social science methodologies, they design controlled persona-generation experiments to compare representational breadth and complexity between queer and mainstream demographic groups.

Contribution/Results: Empirical findings show that LLMs consistently produce significantly less diverse and less nuanced portrayals of queer individuals than of non-queer counterparts. The study proposes an approach to AI fairness evaluation grounded in intersectional, discourse-aware metrics and offers actionable guidelines for responsible generative AI development.
📝 Abstract
One way social groups are marginalized in discourse is that the narratives told about them often default to a narrow, stereotyped range of topics; default groups, in contrast, are allowed the full complexity of human existence. We describe the constrained representations of queer people in LLM generations in terms of harmful representations, narrow representations, and discursive othering, and formulate hypotheses to test for these phenomena. Our results show that LLMs are significantly limited in their portrayals of queer personas.