Biased Tales: Cultural and Topic Bias in Generating Children's Stories

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reveals significant cultural and gender biases in children's stories generated by large language models (LLMs): female protagonists receive 55.26% more physical-appearance descriptions, while non-Western child characters are systematically over-associated with heritage, tradition, and family themes. To identify such biases systematically, the authors construct a cross-culture- and gender-annotated dataset of children's narratives, combining expert annotation with quantitative analysis of protagonist attributes and thematic distributions. They propose a "thematic association strength" metric to quantify stereotypical topic binding between identity groups and narrative themes, yielding an interpretable bias-diagnostic framework. The results confirm the structural presence of implicit social biases in creative AI and provide a reproducible evaluation paradigm, along with actionable mitigation strategies, to advance fairness and inclusivity in AI-generated children's content.
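The summary mentions a "thematic association strength" metric that quantifies how strongly an identity group co-occurs with a narrative theme. The exact formula is not given here, so the sketch below is one plausible PMI-style (pointwise mutual information) realization; the function name, data layout, and add-one smoothing are assumptions, not the paper's definition.

```python
import math

def thematic_association_strength(stories, group, theme, smoothing=1.0):
    """PMI-style association between an identity group and a theme.

    stories: list of (group_label, set_of_themes) pairs, one per story.
    Positive values indicate the theme appears with the group more often
    than chance; negative values indicate under-association.
    """
    n = len(stories)
    n_group = sum(1 for g, _ in stories if g == group)
    n_theme = sum(1 for _, ts in stories if theme in ts)
    n_joint = sum(1 for g, ts in stories if g == group and theme in ts)

    # Add-one smoothing keeps the log defined for unseen combinations.
    p_joint = (n_joint + smoothing) / (n + smoothing)
    p_group = (n_group + smoothing) / (n + smoothing)
    p_theme = (n_theme + smoothing) / (n + smoothing)
    return math.log(p_joint / (p_group * p_theme))
```

On a toy corpus where "tradition" themes are concentrated in stories about non-Western protagonists, the score comes out positive for that group and negative for the Western group, matching the over-association pattern the summary reports.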

📝 Abstract
Stories play a pivotal role in human communication, shaping beliefs and morals, particularly in children. As parents increasingly rely on large language models (LLMs) to craft bedtime stories, the presence of cultural and gender stereotypes in these narratives raises significant concerns. To address this issue, we present Biased Tales, a comprehensive dataset designed to analyze how biases influence protagonists' attributes and story elements in LLM-generated stories. Our analysis uncovers striking disparities. When the protagonist is described as a girl (as compared to a boy), appearance-related attributes increase by 55.26%. Stories featuring non-Western children disproportionately emphasize cultural heritage, tradition, and family themes far more than those for Western children. Our findings underscore the importance of addressing sociocultural bias to make creative AI use more equitable and diverse.
Problem

Research questions and friction points this paper is trying to address.

Detecting cultural and gender stereotypes in LLM-generated children's stories
Analyzing bias in protagonist attributes and story elements across demographics
Addressing sociocultural bias to make creative AI more equitable and diverse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes bias in LLM-generated children's stories
Uses comprehensive dataset to evaluate protagonist attributes
Reveals cultural and gender disparities in story elements