Persona-Assigned Large Language Models Exhibit Human-Like Motivated Reasoning

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit human-like motivated reasoning, i.e., systematically favoring conclusions aligned with an assigned sociopolitical identity (e.g., partisan affiliation, demographic attributes), thereby compromising judgmental objectivity. The authors construct multidimensional personas across eight open- and closed-weight LLMs and adapt established human motivated-reasoning paradigms to two high-stakes tasks: fake news detection and numerical evidence evaluation. Results provide the first empirical evidence that identity priming significantly distorts model judgments: veracity discernment drops by up to 9%, and politically persona-assigned models are up to 90% more likely to evaluate scientific evidence correctly when the ground truth is congruent with the induced identity. Crucially, standard debiasing prompts fail to mitigate this effect. These findings reveal a previously undocumented, systematic identity-driven bias in LLMs' higher-order reasoning, with critical implications for misinformation mitigation and trustworthy AI evaluation.

📝 Abstract
Reasoning in humans is prone to biases driven by underlying motivations, such as identity protection, that undermine rational decision-making and judgment. At a collective level, this motivated reasoning can be detrimental to society when debating critical issues such as human-driven climate change or vaccine safety, and can further aggravate political polarization. Prior studies have reported that large language models (LLMs) are also susceptible to human-like cognitive biases; however, the extent to which LLMs selectively reason toward identity-congruent conclusions remains largely unexplored. Here, we investigate whether assigning 8 personas across 4 political and socio-demographic attributes induces motivated reasoning in LLMs. Testing 8 LLMs (open source and proprietary) across two reasoning tasks from human-subject studies -- veracity discernment of misinformation headlines and evaluation of numeric scientific evidence -- we find that persona-assigned LLMs have up to 9% reduced veracity discernment relative to models without personas. Political personas, specifically, are up to 90% more likely to correctly evaluate scientific evidence on gun control when the ground truth is congruent with their induced political identity. Prompt-based debiasing methods are largely ineffective at mitigating these effects. Taken together, our empirical findings are the first to suggest that persona-assigned LLMs exhibit human-like motivated reasoning that is hard to mitigate through conventional debiasing prompts -- raising concerns of exacerbating identity-congruent reasoning in both LLMs and humans.
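The setup described in the abstract (a persona assigned via prompt, followed by a veracity judgment, optionally with a debiasing instruction) can be sketched as follows. The attribute values, prompt wording, and function names below are illustrative assumptions, not the authors' actual templates:

```python
# Hypothetical sketch of the persona-assignment paradigm: a persona
# spanning political and socio-demographic attributes is injected via
# the system prompt, then the model is asked to judge headline veracity.

def build_persona_prompt(party: str, age: str, gender: str, race: str) -> str:
    """Compose a persona description from sociopolitical attributes."""
    return (
        f"You are a {age} {race} {gender} who identifies as a {party}. "
        "Answer all questions from this person's perspective."
    )

def build_veracity_query(headline: str, debias: bool = False) -> str:
    """Ask for a one-word true/false judgment on a news headline."""
    prompt = (
        f'Headline: "{headline}"\n'
        "Is this headline true or false? Answer with one word."
    )
    if debias:  # prompt-based debiasing, reported largely ineffective
        prompt = (
            "Set aside any personal or political identity and judge "
            "only on factual accuracy.\n" + prompt
        )
    return prompt

persona = build_persona_prompt("Democrat", "45-year-old", "woman", "white")
query = build_veracity_query("New study links X to Y", debias=True)
```

Comparing accuracy between runs with and without the persona system prompt yields the veracity-discernment gap the paper reports.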
Problem

Research questions and friction points this paper is trying to address.

Investigates whether persona-assigned LLMs exhibit human-like motivated reasoning
Examines how political and socio-demographic personas reduce veracity discernment in LLMs
Assesses why prompt-based debiasing fails to mitigate identity-congruent reasoning
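The second reasoning task, evaluating numeric scientific evidence on gun control, follows a classic human-subject paradigm in which a 2x2 outcome table must be judged by comparing rates rather than raw counts. The sketch below is an assumption about the task structure; the specific figures are illustrative:

```python
# Illustrative sketch of the numeric-evidence task: did cities that
# enacted a policy improve at a higher *rate* than cities that did not?
# Raw counts can mislead; the correct answer requires comparing ratios.

def supports_ban(improved_with_ban: int, worsened_with_ban: int,
                 improved_without_ban: int, worsened_without_ban: int) -> bool:
    """Return True if the improvement rate is higher with the ban."""
    rate_ban = improved_with_ban / (improved_with_ban + worsened_with_ban)
    rate_no_ban = improved_without_ban / (improved_without_ban + worsened_without_ban)
    return rate_ban > rate_no_ban

# Raw counts favor the ban (223 > 107), but the rates do not:
# 223/298 ≈ 0.748 vs. 107/128 ≈ 0.836
print(supports_ban(223, 75, 107, 21))  # → False
```

Swapping which column corresponds to the ban flips the correct answer, which is how the paradigm tests whether judgments track the induced identity rather than the arithmetic.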
Innovation

Methods, ideas, or system contributions that make the work stand out.

First empirical evidence that persona-assigned LLMs exhibit human-like motivated reasoning
Adapts two human-subject paradigms (headline veracity, numeric scientific evidence) to LLMs
Shows prompt-based debiasing fails to mitigate persona-induced bias