🤖 AI Summary
This study addresses the tendency of current large language models (LLMs) to overlook the cultural background of emotion expressers, erroneously assuming emotional universality and thereby introducing interpretive biases in cross-cultural emotion understanding. To mitigate this, the work proposes a Generator–Interpreter dual-perspective framework that explicitly incorporates the generator’s cultural viewpoint. The authors systematically evaluate six prominent LLMs on an emotion attribution task across 15 countries, leveraging a newly constructed multicultural emotion dataset and alignment analyses. Findings reveal that model performance is significantly modulated by both emotion type and cultural context, with the generator’s nationality exerting a stronger influence on attribution accuracy than the interpreter’s perspective. These results substantiate a dual-perspective alignment effect, offering a novel pathway toward more equitable and robust cross-cultural emotion understanding in artificial intelligence systems.
📝 Abstract
Large language models (LLMs) are increasingly used in cross-cultural systems to understand and adapt to human emotions, which are shaped by cultural norms of expression and interpretation. However, prior work on emotion attribution has focused mainly on interpretation, overlooking the cultural background of emotion generators. This implicit assumption of universality neglects variation in how emotions are expressed and perceived across nations. To address this gap, we propose a Generator–Interpreter framework that captures dual perspectives of emotion attribution by considering both expression and interpretation. We systematically evaluate six LLMs on an emotion attribution task using data from 15 countries. Our analysis reveals that performance varies with both emotion type and cultural context. We further observe generator–interpreter alignment effects, with the generator's country of origin exerting a stronger influence on performance than the interpreter's perspective. We call for culturally sensitive emotion modeling in LLM-based systems to improve robustness and fairness in emotion understanding across diverse cultural contexts.