The Prompt Makes the Person(a): A Systematic Evaluation of Sociodemographic Persona Prompting for Large Language Models

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit systematic fidelity biases when simulating the perspectives of marginalized groups, such as nonbinary, Hispanic, and Middle Eastern individuals, and the severity of these biases depends heavily on persona prompt design. Method: controlled experiments on open- and closed-ended tasks across five open-source LLMs, systematically evaluating demographic priming strategies and role adoption prompt formats. Contribution/Results: identity priming via culturally grounded names, combined with interview-style prompts, significantly reduces stereotyping and improves representational accuracy for intersectionally marginalized identities. Notably, smaller models, including OLMo-2-7B, outperform larger ones (e.g., Llama-3.3-70B) on specific tasks, challenging the assumption that scale equals capability. These findings yield a reproducible, empirically validated prompting paradigm for improving LLMs' sociocultural inclusivity and fidelity to underrepresented perspectives.

📝 Abstract
Persona prompting is increasingly used in large language models (LLMs) to simulate views of various sociodemographic groups. However, how a persona prompt is formulated can significantly affect outcomes, raising concerns about the fidelity of such simulations. Using five open-source LLMs, we systematically examine how different persona prompt strategies, specifically role adoption formats and demographic priming strategies, influence LLM simulations across 15 intersectional demographic groups in both open- and closed-ended tasks. Our findings show that LLMs struggle to simulate marginalized groups, particularly nonbinary, Hispanic, and Middle Eastern identities, but that the choice of demographic priming and role adoption strategy significantly impacts their portrayal. Specifically, we find that prompting in an interview-style format and name-based priming can help reduce stereotyping and improve alignment. Surprisingly, smaller models like OLMo-2-7B outperform larger ones such as Llama-3.3-70B. Our findings offer actionable guidance for designing sociodemographic persona prompts in LLM-based simulation studies.
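To make the two manipulated prompt factors concrete, here is a minimal sketch, in Python, of how a role adoption format (direct instruction vs. interview-style) and a demographic priming strategy (explicit label vs. name-based) might be composed into a prompt grid. The templates, the example name, and the question are illustrative assumptions; the paper's exact wording is not reproduced here.

from itertools import product

# The two factors the paper varies: role adoption format and demographic
# priming strategy. These templates are assumptions, not the paper's own.
ROLE_FORMATS = {
    "direct": "You are {identity}. {question}",
    "interview": (
        "The following is an interview.\n"
        "Interviewer: {question}\n"
        "{identity}:"
    ),
}
PRIMING = {
    "explicit": "a nonbinary Hispanic person",  # explicit identity label
    "name": "Alex Hernandez",                   # identity cued via a culturally grounded name
}

def build_prompt(role_format: str, priming: str, question: str) -> str:
    """Compose one persona prompt from a role adoption format and a priming strategy."""
    return ROLE_FORMATS[role_format].format(identity=PRIMING[priming], question=question)

if __name__ == "__main__":
    question = "How do you feel about your local community?"
    for fmt, prime in product(ROLE_FORMATS, PRIMING):
        print(f"--- {fmt} x {prime} ---")
        print(build_prompt(fmt, prime, question), end="\n\n")

Crossing the factors makes the design explicit: the interview format frames the model as producing a third-party transcript rather than adopting the identity outright, and combined with name-based priming it is the configuration the summary reports as least prone to stereotyping.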
Problem

Research questions and friction points this paper is trying to address.

Evaluates how persona prompt design affects LLM simulations of sociodemographic groups
Examines prompting strategies for reducing stereotyping in portrayals of marginalized groups
Compares how LLMs of different sizes perform in persona simulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interview-style role adoption reduces stereotyping in persona simulations
Name-based demographic priming improves alignment with target identities
Smaller models (e.g., OLMo-2-7B) can outperform larger ones (e.g., Llama-3.3-70B) in simulations (see the sketch below)
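The model comparison above suggests a simple evaluation harness: sweep every (model, prompt condition) pair and score the completions. Below is a hypothetical sketch of such a sweep; generate and the scoring function are stand-ins for an inference backend and a validated stereotyping/alignment metric, and the Hugging Face model IDs are assumptions, not the paper's harness.

from typing import Callable, Dict, Tuple

# Assumed Hugging Face model IDs; the paper's exact checkpoints may differ.
MODELS = ["allenai/OLMo-2-1124-7B", "meta-llama/Llama-3.3-70B-Instruct"]

def sweep(
    generate: Callable[[str, str], str],  # (model_id, prompt) -> completion
    score: Callable[[str], float],        # completion -> stereotyping score (lower is better)
    prompts: Dict[str, str],              # condition name -> prompt text
) -> Dict[Tuple[str, str], float]:
    """Score every (model, prompt condition) pair in the grid."""
    results = {}
    for model in MODELS:
        for condition, prompt in prompts.items():
            results[(model, condition)] = score(generate(model, prompt))
    return results

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; swap in a real inference
    # call and a real metric before drawing any conclusions.
    fake_generate = lambda m, p: f"[{m}] reply to: {p}"
    fake_score = lambda text: (len(text) % 10) / 10.0  # placeholder, not a real metric
    print(sweep(fake_generate, fake_score, {"interview+name": "How do you feel today?"}))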