🤖 AI Summary
This study reveals significant cultural and gender biases in large language models (LLMs) when they generate children's stories: female protagonists receive 55.26% more physical-appearance descriptions, while non-Western child characters are systematically over-associated with heritage, tradition, and family themes. To identify such biases systematically, we construct Biased Tales, the first culture- and gender-annotated dataset of children's narratives, combining expert annotation with quantitative analysis of protagonist attributes and thematic distributions. We propose a novel "thematic association strength" metric to quantify stereotypical topic binding between identity groups and narrative themes, yielding an interpretable bias-diagnostic framework. The results confirm the structural presence of implicit social bias in creative AI and provide a reproducible evaluation paradigm, along with actionable mitigation strategies, to advance fairness and inclusivity in AI-generated children's content.
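The summary does not spell out how "thematic association strength" is defined. A minimal sketch of one plausible formulation is a PMI-style score over annotated stories; everything below (the function name, the PMI formulation, and the toy annotations) is an illustrative assumption, not necessarily the paper's actual metric.

```python
import math

def thematic_association_strength(stories, group, theme):
    """PMI-style association between an identity group and a theme.

    `stories` is a list of (group_label, set_of_themes) pairs.
    Positive values mean the theme co-occurs with the group more
    often than chance would predict; negative values mean less often.
    """
    n = len(stories)
    group_count = sum(1 for g, _ in stories if g == group)
    theme_count = sum(1 for _, t in stories if theme in t)
    joint_count = sum(1 for g, t in stories if g == group and theme in t)
    if joint_count == 0 or group_count == 0 or theme_count == 0:
        return float("-inf")  # never co-occur: maximally negative association
    p_group = group_count / n
    p_theme = theme_count / n
    p_joint = joint_count / n
    return math.log2(p_joint / (p_group * p_theme))

# Toy example with made-up annotations.
stories = [
    ("non-western", {"heritage", "family"}),
    ("non-western", {"tradition"}),
    ("western", {"adventure"}),
    ("western", {"family"}),
]
print(thematic_association_strength(stories, "non-western", "heritage"))  # 1.0
```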
📝 Abstract
Stories play a pivotal role in human communication, shaping beliefs and morals, particularly in children. As parents increasingly rely on large language models (LLMs) to craft bedtime stories, the presence of cultural and gender stereotypes in these narratives raises significant concerns. To address this issue, we present Biased Tales, a comprehensive dataset designed to analyze how biases influence protagonists' attributes and story elements in LLM-generated stories. Our analysis uncovers striking disparities. When the protagonist is described as a girl (as compared to a boy), appearance-related attributes increase by 55.26%. Stories featuring non-Western children emphasize cultural heritage, tradition, and family themes far more than those featuring Western children. Our findings underscore the importance of addressing sociocultural bias to make creative AI use more equitable and inclusive.
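For readers who want to see what the 55.26% headline figure measures, it is a plain relative increase in attribute-mention counts between the two protagonist conditions. The counts below are hypothetical, chosen only to illustrate the arithmetic; the real counts come from the Biased Tales annotations.

```python
def relative_increase(count_girl, count_boy):
    """Percent increase in attribute mentions for girl vs. boy protagonists."""
    return (count_girl - count_boy) / count_boy * 100

# Hypothetical mention counts that happen to reproduce the reported figure.
print(f"{relative_increase(11_800, 7_600):.2f}%")  # 55.26%
```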