🤖 AI Summary
This study investigates implicit gender narrative biases in stories generated by ChatGPT, Gemini, and Claude. Employing a controlled prompting framework grounded in Propp's narrative functions and Freytag's dramatic structure, combined with close reading, we conduct qualitative analysis across five dimensions: character gender distribution, descriptive attributes, behavioral patterns, plot progression, and relational configurations. Results reveal pervasive reproduction of traditional gender stereotypes across all models, manifesting in particular as absent or underdeveloped psychological interiority for female characters, systematic passivization of women, and power-imbalanced relational structures. The findings empirically validate the efficacy of close reading in uncovering deep-seated narrative bias. Methodologically, we propose a novel "multi-layer interpretive analytical framework" that moves beyond surface-level statistical metrics to integrate narrative structure and semantic practice for holistic bias assessment, offering a rigorous methodological foundation for AI content fairness governance.
📝 Abstract
This paper investigates gender-based narrative biases in stories generated by ChatGPT, Gemini, and Claude. The prompt design draws on Propp's character classifications and Freytag's narrative structure. The stories are analyzed through a close reading approach, with particular attention to adherence to the prompt, gender distribution of characters, physical and psychological descriptions, actions, and finally, plot development and character relationships. The results reveal the persistence of biases, especially implicit ones, in the generated stories and highlight the importance of assessing biases at multiple levels using an interpretative approach.