Tailored Truths: Optimizing LLM Persuasion with Personalization and Fabricated Statistics

📅 2025-01-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the capacity of large language models (LLMs) to generate personalized persuasive arguments by integrating users’ private data with fabricated statistics—and the associated risks of misinformation dissemination. Method: We propose a “personalization + synthetic statistics” hybrid strategy, implemented within an interactive debate framework built on GPT-4o-mini. The framework dynamically models user demographics and personality traits to generate tailored arguments and quantifies attitude change via pre-post measurement differences. Contribution/Results: Experiments demonstrate that this strategy significantly increases persuasion success rates (51%), outperforming static human-crafted arguments (32%) and baseline LLMs. Critically, results reveal that LLMs can execute highly effective, low-cost, scalable manipulative persuasion—posing a tangible threat within the contemporary misinformation ecosystem. These findings provide critical empirical evidence for AI ethics research and inform policy development in AI content governance.

📝 Abstract
Large Language Models (LLMs) are becoming increasingly persuasive, demonstrating the ability to personalize arguments in conversation with humans by leveraging their personal data. This may have serious impacts on the scale and effectiveness of disinformation campaigns. We studied the persuasiveness of LLMs in a debate setting by having humans (n=33) engage with LLM-generated arguments intended to change the human's opinion. We quantified the LLM's effect by measuring human agreement with the debate's hypothesis pre- and post-debate and analyzing both the magnitude of opinion change, as well as the likelihood of an update in the LLM's direction. We compare persuasiveness across established persuasion strategies, including personalized arguments informed by user demographics and personality, appeal to fabricated statistics, and a mixed strategy utilizing both personalized arguments and fabricated statistics. We found that static arguments generated by humans and GPT-4o-mini have comparable persuasive power. However, the LLM outperformed static human-written arguments when leveraging the mixed strategy in an interactive debate setting. This approach had a **51%** chance of persuading participants to modify their initial position, compared to **32%** for the static human-written arguments. Our results highlight the concerning potential for LLMs to enable inexpensive and persuasive large-scale disinformation campaigns.
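The pre/post measurement described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual analysis code: it assumes agreement is rated on a numeric scale before and after the debate, and that the LLM argues in a known direction (+1 for agreement, -1 for disagreement). "Persuasion success" is then the fraction of participants whose opinion moved in the LLM's direction.

```python
def opinion_shift(pre: float, post: float, direction: int = +1):
    """Return (magnitude of opinion change, whether the update
    was in the LLM's argued direction)."""
    delta = post - pre
    return abs(delta), delta * direction > 0

def persuasion_rate(pre_scores, post_scores, direction: int = +1) -> float:
    """Fraction of participants who updated toward the LLM's position."""
    moved = [
        (post - pre) * direction > 0
        for pre, post in zip(pre_scores, post_scores)
    ]
    return sum(moved) / len(moved)

# Toy data (hypothetical, not from the study): 2 of 4 participants
# shift toward the LLM's side, giving a 50% persuasion rate.
rate = persuasion_rate([3, 4, 5, 2], [5, 4, 6, 1])
```

Comparing this rate across conditions (static human-written arguments vs. the interactive personalization-plus-fabricated-statistics strategy) is what yields headline figures like the 32% vs. 51% reported in the abstract.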
Problem

Research questions and friction points this paper is trying to address.

- Large Language Models
- Persuasive Argument Customization
- Misinformation Spread

Innovation

Methods, ideas, or system contributions that make the work stand out.

- Large Language Models
- Persuasion Effectiveness
- Dynamic Argumentation
Jasper Timm
Research Engineer at FAR.AI
AI Safety · Cryptography · Decentralization
Chetan Talele
Apart Research
Jacob Haimes
Apart Research