Everything is Plausible: Investigating the Impact of LLM Rationales on Human Notions of Plausibility

📅 2025-10-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how rationales generated by large language models (LLMs) influence human judgments on commonsense multiple-choice questions. LLMs generate both supporting (PRO) and opposing (CON) rationales for candidate answers, which are then evaluated in a large-scale plausibility assessment comprising 3,000 human judgments and 13,600 LLM judgments. Statistical analysis shows that PRO rationales significantly increase human confidence in answers, while CON rationales significantly decrease it; LLMs themselves exhibit analogous sensitivity. The work demonstrates that, even in domains where humans excel, such as commonsense reasoning, LLM-generated rationales can systematically shift human beliefs, offering a new method for studying AI's influence on human inference and highlighting cognitive risks in human–AI interaction.

📝 Abstract
We investigate the degree to which human plausibility judgments of multiple-choice commonsense benchmark answers are subject to influence by (im)plausibility arguments for or against an answer, in particular, using rationales generated by LLMs. We collect 3,000 plausibility judgments from humans and another 13,600 judgments from LLMs. Overall, we observe increases and decreases in mean human plausibility ratings in the presence of LLM-generated PRO and CON rationales, respectively, suggesting that, on the whole, human judges find these rationales convincing. Experiments with LLMs reveal similar patterns of influence. Our findings demonstrate a novel use of LLMs for studying aspects of human cognition, while also raising practical concerns that, even in domains where humans are "experts" (i.e., common sense), LLMs have the potential to exert considerable influence on people's beliefs.
Problem

Research questions and friction points this paper is trying to address.

Investigating how LLM rationales influence human plausibility judgments
Measuring changes in human ratings when exposed to PRO/CON arguments
Examining LLM potential to sway human beliefs in commonsense domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLM rationales to influence human plausibility judgments
Collecting human and LLM judgments for comparative analysis
Demonstrating LLM impact on human cognition in commonsense domains
🔎 Similar Papers
2024-01-24 · Nature Machine Intelligence · Citations: 7