🤖 AI Summary
This study investigates the alignment between the social roles generated by large language models (LLMs) and human social cognition in low-resource settings, using Bangladesh as a representative case. Methodologically, we designed a culturally grounded questionnaire to compare human participants' responses with those of eight state-of-the-art LLMs across multidimensional social identity tasks, constructing a quantitative evaluation framework spanning affective valence, personality perception, and social trustworthiness. Results reveal a pervasive "Pollyanna effect" in LLM outputs: significantly higher affective scores than humans (5.99 vs. 5.60), yet systematically lower performance in affective authenticity and social trustworthiness, with all LLMs underperforming humans across every dimension. This work provides the first empirical evidence of socially biased role representations by LLMs in low-resource contexts and establishes the necessity of local human data for calibrating AI-generated social roles, thereby laying a methodological foundation and practical pathway for social-science-informed AI role modeling.
📝 Abstract
Recent advances enable Large Language Models (LLMs) to generate AI personas, yet their limited contextual, cultural, and emotional understanding remains a significant constraint. This study quantitatively compared human responses with those of eight LLM-generated social personas (e.g., Male, Female, Muslim, Political Supporter) in a low-resource setting, Bangladesh, using culturally specific questions. Results show that human responses significantly outperform all LLMs in answering questions and across all metrics of persona perception, with particularly large gaps in empathy and credibility. Furthermore, LLM-generated content exhibited a systematic bias consistent with the "Pollyanna Principle", scoring measurably higher in positive sentiment ($\Phi_{avg} = 5.99$ for LLMs vs. $5.60$ for humans). These findings suggest that LLM personas do not accurately reflect the authentic experiences of real people in resource-scarce environments. LLM personas should therefore be validated against real-world human data to ensure their alignment and reliability before they are deployed in social science research.