Replicating Human Motivated Reasoning Studies with LLMs

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether base large language models (LLMs) can replicate human motivated reasoning in political contexts. By systematically reproducing four classic human experiments and employing prompt engineering alongside standardized task designs, the authors evaluate whether these models exhibit motivation-driven biases in information processing akin to those observed in human cognition. The findings reveal that LLMs diverge significantly from human judgments when assessing argument strength and display little of the individual variability seen across human participants, suggesting an inability to effectively simulate human-like motivated reasoning. This work is a systematic examination of the alignment between LLMs and human behavior on such high-level cognitive tasks, highlighting cognitive limitations and potential risks of deploying current models in automated social science research.

📝 Abstract
Motivated reasoning -- the idea that individuals processing information may be motivated to reach a certain conclusion, whether it be accurate or predetermined -- has been well-explored as a human phenomenon. However, it is unclear whether base LLMs mimic these motivational changes. Replicating 4 prior political motivated reasoning studies, we find that base LLM behavior does not align with expected human behavior. Furthermore, base LLM behavior across models shares some similarities, such as smaller standard deviations and inaccurate argument strength assessments. We emphasize the importance of these findings for researchers using LLMs to automate tasks such as survey data collection and argument assessment.
Problem

Research questions and friction points this paper is trying to address.

motivated reasoning
large language models
human behavior replication
political reasoning
cognitive bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

motivated reasoning
large language models
cognitive bias replication
argument assessment
human-AI alignment