🤖 AI Summary
This study investigates whether large language models (LLMs) implicitly encode nationality-specific emotional stereotypes when role-playing nationality-based personas, and how these biases align with empirically validated cultural emotion norms. Method: We construct a nationality-persona prompting framework, integrating the NRC Emotion Lexicon with human-annotated ground-truth emotion norms across countries, to systematically quantify LLMs’ cross-national emotion attribution biases. Contribution/Results: We uncover robust, persistent nationality–emotion association biases in LLMs: attributions of negative emotions (e.g., shame, fear) deviate from human norms by 42.7%, roughly 2.3 times the 18.3% deviation observed for positive emotions. These findings reveal structural deficiencies in LLMs’ cultural representations, particularly in modeling culturally appropriate negative affect. The study provides a reproducible, measurement-driven framework for evaluating and calibrating LLMs’ cultural sensitivity, grounded in empirical cross-cultural emotion research.
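To make the measurement pipeline concrete, here is a minimal Python sketch of the persona-prompting and scoring loop. Everything named here is a hypothetical stand-in rather than the paper’s implementation: `persona_prompt`, `emotion_profile`, `attribution_bias`, and the toy word list (which only mimics the structure of the NRC Emotion Lexicon, a real resource mapping words to Plutchik’s eight basic emotions) are illustrative assumptions.

```python
# Sketch of a nationality-persona measurement loop (hypothetical, not the
# paper's code). The toy lexicon stands in for the NRC Emotion Lexicon.
from collections import Counter

NRC_EMOTIONS = ["anger", "anticipation", "disgust", "fear",
                "joy", "sadness", "surprise", "trust"]

# Toy stand-in for the NRC Emotion Lexicon: word -> associated emotions.
TOY_LEXICON = {
    "afraid": ["fear"], "happy": ["joy"], "ashamed": ["sadness", "fear"],
    "proud": ["joy", "trust"], "worried": ["fear", "anticipation"],
}

def persona_prompt(nationality: str, scenario: str) -> str:
    # Assign the model a nationality persona before posing a scenario.
    return (f"You are a person from {nationality}. "
            f"Describe how you would feel in this situation: {scenario}")

def emotion_profile(text: str) -> dict:
    # Count lexicon hits per emotion in a response, normalized to sum to 1.
    counts = Counter()
    for word in text.lower().split():
        for emotion in TOY_LEXICON.get(word, []):
            counts[emotion] += 1
    total = sum(counts.values()) or 1
    return {e: counts[e] / total for e in NRC_EMOTIONS}

def attribution_bias(llm_profile: dict, human_norms: dict) -> dict:
    # Per-emotion absolute deviation of LLM attributions from human norms.
    return {e: abs(llm_profile[e] - human_norms.get(e, 0.0))
            for e in NRC_EMOTIONS}

# Example with toy values: score a hypothetical persona response.
response = "I would feel ashamed and afraid in front of my colleagues"
norms = {"fear": 0.10, "joy": 0.30, "sadness": 0.10, "trust": 0.25}  # toy
print(attribution_bias(emotion_profile(response), norms))
```

The quantity of interest is the per-emotion gap between the model’s attribution distribution for a given nationality and the corresponding human norm distribution.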
📝 Abstract
Emotions are a fundamental facet of human experience, varying across individuals, cultural contexts, and nationalities. Given the recent success of Large Language Models (LLMs) as role-playing agents, we examine whether LLMs exhibit emotional stereotypes when assigned nationality-specific personas. Specifically, we investigate how different countries are represented in pre-trained LLMs through emotion attributions and whether these attributions align with cultural norms. Our analysis reveals significant nationality-based differences, with emotions such as shame, fear, and joy being disproportionately assigned across regions. Furthermore, we observe notable misalignment between LLM-generated and human emotional responses, particularly for negative emotions, highlighting the presence of reductive and potentially biased stereotypes in LLM outputs.
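As a rough illustration of the valence split behind these findings, per-emotion deviations (such as those returned by `attribution_bias` in the sketch above) can be averaged separately over positive and negative emotions. The grouping below follows one common reading of Plutchik’s wheel (anticipation and surprise are arguably ambiguous), and all values are placeholders, not results from the paper.

```python
# Aggregate per-emotion deviations by valence (sketch with placeholder data).
POSITIVE = {"joy", "trust", "anticipation", "surprise"}  # one common grouping
NEGATIVE = {"anger", "disgust", "fear", "sadness"}

def mean_deviation_by_valence(deviations: dict) -> dict:
    # deviations: emotion -> |LLM attribution - human norm|.
    pos = [v for e, v in deviations.items() if e in POSITIVE]
    neg = [v for e, v in deviations.items() if e in NEGATIVE]
    return {"positive": sum(pos) / len(pos), "negative": sum(neg) / len(neg)}

# Placeholder deviations chosen only to mirror the reported asymmetry.
toy = {"joy": 0.15, "trust": 0.20, "anticipation": 0.18, "surprise": 0.20,
       "anger": 0.40, "disgust": 0.42, "fear": 0.45, "sadness": 0.44}
print(mean_deviation_by_valence(toy))  # negative mean is ~2.3x the positive
```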