Prosocial Persuasion at Scale? Large Language Models Outperform Humans in Donation Appeals Across Levels of Personalization

📅 2026-04-03
📈 Citations: 0 (influential: 0)
🤖 AI Summary
This study investigates the effectiveness of large language model (LLM)-generated charitable fundraising appeals at varying levels of personalization, benchmarked against human-written messages. Two preregistered online experiments systematically evaluated generic, genuinely personalized, and falsely personalized appeals using behavioral donation outcomes, content-engagement metrics, and subjective persuasiveness ratings. The findings provide empirical evidence that LLM-generated appeals outperform human-crafted ones in increasing donation amounts, raising engagement, and earning higher persuasiveness ratings. Moreover, genuine personalization further amplified persuasive impact (Study 2), whereas false personalization backfired (Study 1), underscoring the critical role of authenticity in personalized messaging.
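To make the three personalization conditions concrete, here is a minimal Python sketch of how such appeals could be generated with an LLM. It is a hypothetical reconstruction, not the authors' materials: the model choice, the prompt wording, and the `profile` fields are all assumptions.

```python
# Hypothetical sketch of generating appeals under the three personalization
# conditions (generic, personalized, falsely personalized). Prompts, model
# name, and profile fields are illustrative assumptions, not study materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_prompt(charity: str, condition: str, profile: dict | None = None) -> str:
    base = f"Write a short, persuasive donation appeal for {charity}."
    if condition == "generic":
        return base
    if condition == "personalized":
        # Genuine personalization: tailor the appeal to real reader attributes.
        return (base + f" Tailor it to a reader who values {profile['values']}"
                f" and cares about {profile['interests']}.")
    if condition == "falsely_personalized":
        # False personalization: the message claims to be tailored but uses
        # no actual information about the reader.
        return (base + " Address the reader as if the message were written"
                " specifically for them, without using any real information"
                " about them.")
    raise ValueError(f"unknown condition: {condition}")


def generate_appeal(charity: str, condition: str, profile: dict | None = None) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": build_prompt(charity, condition, profile)}],
    )
    return response.choices[0].message.content


print(generate_appeal(
    "a clean-water charity",
    condition="personalized",
    profile={"values": "fairness", "interests": "global health"},
))
```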
📝 Abstract
Large Language Models (LLMs) are increasingly regarded as having the potential to generate persuasive content at scale. While previous studies have focused on the risks associated with LLM-generated misinformation, the role of LLMs in enabling prosocial persuasion is still underexplored. We investigate whether donation appeals authored by LLMs are as effective as those written by humans across degrees of personalization. Two preregistered online experiments (Study 1: N = 658; Study 2: N = 642) manipulated Personalization (generic vs. personalized vs. falsely personalized) and Content source (human vs. LLM) and presented participants with donation appeals for charities. We assessed how participants distributed their bonus money across the charities, how they engaged with the donation appeals, and how persuasive they found them. In both experiments, LLM-generated content yielded more donations, resulted in higher engagement, and was rated as more persuasive than human-authored content. There was a gain associated with personalization (Study 2) and a penalty for false personalization (Study 1). Our results suggest that LLMs may be a suitable technology for generating content that can encourage prosocial behavior.
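The 2 (content source) × 3 (personalization) factorial design lends itself to a standard two-way analysis of donation amounts. The sketch below is illustrative only: the toy data, column names, and OLS-plus-ANOVA specification are assumptions, not the preregistered analysis plan.

```python
# Minimal sketch of a 2 (source) x 3 (personalization) factorial analysis of
# donation amounts, one row per participant. Toy data for illustration; the
# actual studies had N = 658 and N = 642 with preregistered analyses.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "donation": [2.1, 1.7, 2.8, 2.4, 1.9, 1.2,   # human-authored appeals
                 3.0, 3.4, 3.6, 4.1, 1.5, 1.1],  # LLM-generated appeals
    "source": ["human"] * 6 + ["llm"] * 6,
    "personalization": (["generic"] * 2 + ["personalized"] * 2 + ["false"] * 2) * 2,
})

# Two-way factorial model with a source-by-personalization interaction.
model = smf.ols("donation ~ C(source) * C(personalization)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```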
Problem

Research questions and friction points this paper is trying to address.

prosocial persuasion
large language models
donation appeals
personalization
persuasive content
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
prosocial persuasion
personalization
donation appeals
human-AI comparison
John Pascal Caffier
Tilburg University
Olga Stavrova
University of Mannheim & Tilburg University
Bennett Kleinberg
Associate Professor
Behavioural Data Science · Computational Social Science · Crime Science · NLP · Psychology