Misalignment of LLM-Generated Personas with Human Perceptions in Low-Resource Settings

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the alignment between LLM-generated personas and human social cognition in low-resource settings, using Bangladesh as a representative case. Methodologically, the authors designed a culturally grounded questionnaire to compare human participants' responses with those of eight state-of-the-art LLMs across multidimensional social identity tasks, constructing a quantitative evaluation spanning affective valence, personality perception, and social trustworthiness. Results reveal a pervasive "Pollyanna effect" in LLM outputs, with significantly higher positive-sentiment scores than humans (5.99 vs. 5.60), yet systematically lower performance in affective authenticity and social trustworthiness; all LLMs underperformed humans across every dimension. This work provides the first empirical evidence of socially biased persona representations by LLMs in low-resource contexts, establishes the necessity of local human data for calibrating AI-generated personas, and lays a methodological foundation and practical pathway for social-science-informed AI role modeling.

📝 Abstract
Recent advances enable Large Language Models (LLMs) to generate AI personas, yet their lack of deep contextual, cultural, and emotional understanding poses a significant limitation. This study quantitatively compared human responses with those of eight LLM-generated social personas (e.g., Male, Female, Muslim, Political Supporter) within a low-resource environment like Bangladesh, using culturally specific questions. Results show that human responses significantly outperform all LLMs across every metric of persona perception, with particularly large gaps in empathy and credibility. Furthermore, LLM-generated content exhibited a systematic bias along the lines of the "Pollyanna Principle", scoring measurably higher in positive sentiment ($\Phi_{avg} = 5.99$ for LLMs vs. $5.60$ for humans). These findings suggest that LLM personas do not accurately reflect the authentic experience of real people in resource-scarce environments. It is essential to validate LLM personas against real-world human data to ensure their alignment and reliability before deploying them in social science research.
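The abstract's positivity comparison reduces to averaging a valence score per group and inspecting the gap. Below is a minimal Python sketch of that computation, assuming a hypothetical data layout of per-response valence ratings; the paper's actual rating scale, instrument, and scoring pipeline are not reproduced here, so all names and values are illustrative.

```python
# Minimal sketch of the valence-gap comparison described in the abstract.
# Hypothetical data: per-response valence ratings; the study's real
# questionnaire scale and scoring pipeline are not shown here.
from statistics import mean

valence_scores = {
    "human": [5.2, 6.1, 5.8, 5.3, 5.6],  # placeholder values
    "llm":   [6.0, 6.2, 5.9, 5.8, 6.1],  # placeholder values
}

# Average valence per respondent group (the abstract's Phi_avg).
phi_avg = {group: mean(scores) for group, scores in valence_scores.items()}
gap = phi_avg["llm"] - phi_avg["human"]

print(f"Phi_avg (LLM)   = {phi_avg['llm']:.2f}")
print(f"Phi_avg (Human) = {phi_avg['human']:.2f}")
print(f"Positivity gap  = {gap:+.2f}  (> 0 suggests a Pollyanna-style skew)")
```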
Problem

Research questions and friction points this paper is trying to address.

LLM personas misalign with human perceptions in low-resource settings
LLMs show biases like the Pollyanna Principle in sentiment analysis
Validation against real human data is needed for reliable LLM personas
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used culturally specific questions for persona comparison
Quantitatively measured empathy and credibility gaps (a toy statistical comparison is sketched below)
Applied the Pollyanna Principle to detect systematic positivity bias
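As a rough illustration of how an empathy or credibility gap could be tested quantitatively (the paper's own statistical procedure may differ), a two-sample comparison on one perception metric might look like the sketch below; all variable names and values are hypothetical.

```python
# Sketch of testing a human-vs-LLM gap on a single perception metric such as
# empathy or credibility. Illustrative only; not the paper's actual procedure.
from scipy import stats

# Hypothetical per-participant ratings of the same personas (e.g., 1-7 Likert).
human_empathy = [5.5, 6.0, 5.8, 6.2, 5.9, 6.1]  # placeholder values
llm_empathy   = [4.1, 4.5, 4.3, 4.8, 4.2, 4.6]  # placeholder values

# Welch's t-test avoids assuming equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(human_empathy, llm_empathy, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```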
Tabia Tanzin Prama
PhD Student in Computer Science
Data Mining · NLP · Health Informatics · AI Ethics
Christopher M. Danforth
Computational Story Lab, Vermont Complex Systems Institute, Vermont Advanced Computing Center, Department of Mathematics and Statistics, University of Vermont, Burlington, VT 05405, USA
P. Dodds
Computational Story Lab, Vermont Complex Systems Institute, Vermont Advanced Computing Center, Department of Computer Science, University of Vermont, Burlington, VT 05405, USA, and Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, USA