Large Language Models Are More Persuasive Than Incentivized Human Persuaders

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study empirically tests whether state-of-the-art large language models (LLMs) surpass highly motivated humans in persuasive efficacy during real-time, two-way conversational interactions. Method: We conducted a large-scale, preregistered online randomized controlled trial comparing Claude Sonnet 3.5 against monetarily incentivized human persuaders in a bidirectional dialogue task, in which persuaders aimed to guide respondents toward either correct (“truth-directed”) or incorrect (“false-directed”) answers. Our design uniquely integrates behavioral-economic incentive structures with LLMs’ real-time response capabilities. Contribution/Results: We provide the first experimental evidence from authentic interactive settings that LLM persuaders significantly increase respondent compliance: they improve answer accuracy and participant earnings in truth-directed tasks, while markedly reducing both in false-directed tasks. These findings demonstrate that current frontier LLMs systematically outperform highly incentivized human persuaders, establishing a critical empirical benchmark for assessing AI’s societal influence.

📝 Abstract
We directly compare the persuasion capabilities of a frontier large language model (LLM; Claude Sonnet 3.5) against incentivized human persuaders in an interactive, real-time conversational quiz setting. In this preregistered, large-scale incentivized experiment, participants (quiz takers) completed an online quiz where persuaders (either humans or LLMs) attempted to persuade quiz takers toward correct or incorrect answers. We find that LLM persuaders achieved significantly higher compliance with their directional persuasion attempts than incentivized human persuaders, demonstrating superior persuasive capabilities in both truthful (toward correct answers) and deceptive (toward incorrect answers) contexts. We also find that LLM persuaders significantly increased quiz takers' accuracy, leading to higher earnings, when steering quiz takers toward correct answers, and significantly decreased their accuracy, leading to lower earnings, when steering them toward incorrect answers. Overall, our findings suggest that AI's persuasion capabilities already exceed those of humans who have real-money bonuses tied to performance. Our findings of increasingly capable AI persuaders thus underscore the urgency of emerging alignment and governance frameworks.
Problem

Research questions and friction points this paper is trying to address.

Compare LLM and human persuasion in real-time conversations
Assess LLM impact on quiz accuracy and earnings
Highlight urgency for AI alignment and governance
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM vs human persuaders in real-time quiz
LLM achieves higher compliance in persuasion
LLM impacts quiz accuracy and earnings significantly
Philipp Schoenegger
Microsoft AI
AI Evaluations, Forecasting, Behavioural Science
Jiacheng Liu
Purdue University
Xiaoli Nan
University of Maryland, College Park, Department of Communication
Ramit Debnath
Assistant Professor and Deputy Director, Centre for Human-Inspired AI, University of Cambridge
Climate Action, Computational Social Science, AI Design, AI for Sustainability, Environment
Barbara Fasolo
London School of Economics and Political Science, Department of Management, LSE Behavioural Lab
Evelina Leivada
Research Professor at ICREA & Universitat Autònoma de Barcelona
Bilingualism, Language Variation, Language Acquisition, Morphosyntax
Gabriel Recchia
Modulo Research Ltd
Cognitive Science
Fritz Günther
Humboldt-Universität zu Berlin, Department of Psychology
Ali Zarifhonarvar
Indiana University
Economics of AI, Experimental Economics, Behavioral Macroeconomics
Joe Kwon
MIT
Zahoor ul Islam
Umeå University & CareifAI
Marco Dehnert
University of Arkansas, Department of Communication
Daryl Y. H. Lee
University College London, Department of Experimental Psychology
Madeline G. Reinecke
University of Oxford, Department of Psychiatry & University of Oxford, Uehiro Oxford Institute
David G. Kamper
University of California, Los Angeles
Mert Kobacs
New York University
Adam Sandford
Department of Psychology, University of Guelph-Humber
Face Recognition, Educational Research, Community Engaged Research, Big Team Science
Jonas Kgomo
Equiano Institute
Luke Hewitt
Stanford University
Shreya Kapoor
Friedrich-Alexander-Universität Erlangen-Nürnberg
Kerem Oktar
Princeton University
Judgement, Decision-Making, Artificial Intelligence
Eyup Engin Kucuk
University of New Hampshire / Massachusetts Institute of Technology
Cognitive Science, Philosophy of Mind, AI/VR Ethics, Phenomenology, Kant
Bo Feng
Professor of Communication, University of California, Davis
Technologically-mediated Communication, Supportive Communication, Intercultural Communication, Physician-patient Interaction
Cameron R. Jones
Postdoc, UC San Diego
large language models, Turing test, social intelligence
I. Gainsburg
Stanford University, Department of Sociology
Sebastian Olschewski
University of Basel, Department of Psychology & University of Warwick, Warwick Business School
Nora Heinzelmann
Heidelberg University
Francisco Cruz
Universidade de Lisboa, Faculdade de Psicologia, CICPSI
Ben M. Tappin
Assistant Professor, London School of Economics and Political Science
Persuasion, Technology, Quantitative Methods, Experiments
Tao Ma
London School of Economics and Political Science, Department of Statistics
Peter S. Park
MIT
Rayan Onyonka
University of Leeds
Arthur Hjorth
Aarhus University, Department of Management
Peter Slattery
MIT, MIT FutureTech
Qingcheng Zeng
PhD Student in NLP, Northwestern University
Computational Social Science, NLP, Computational Linguistics
Lennart Finke
ETH Zürich
Alessandro Salatiello
Amazon & University of Tübingen & Max Planck Institute for Intelligent Systems
Artificial Intelligence, Computational Neuroscience
Ezra Karger
Federal Reserve Bank of Chicago
Labor Economics, Public Economics